The Journal of Extension - www.joe.org

June 2015 // Volume 53 // Number 3 // Ideas at Work // v53-3iw3

Using Evaluation to Guide and Validate Improvements to the Utah Master Naturalist Program

Abstract
Integrating evaluation into an Extension program offers multiple opportunities to understand program success: whether program goals and objectives are being achieved, whether programming is being delivered using the most effective techniques, and whether program audiences need refining. Less commonly, evaluation is used to guide program revision and to validate its effectiveness. Early evaluation results from Utah Master Naturalist Watersheds classes were used to make specific, targeted program revisions, and significant increases in recent evaluation results validated that those revisions were successful. Using evaluation in this way conserves time, effort, and resources, and helps achieve the high level of program success expected of Extension professionals.


Mark Larese-Casanova
Extension Assistant Professor
Utah State University
Logan, Utah
marklc@usu.edu

Introduction

Evaluation can serve many purposes for an Extension program. In the simplest of terms, evaluation is necessary to ensure program accountability (McKenna, 1983). That is, evaluation is a means by which we not only convey that proposed work was fulfilled, but also measure whether program goals and objectives were met and outcomes or impacts were achieved (Flowers, 2010; Rossi, Freeman, & Lipsey, 2003; Van Den Berg & Dann, 2008; Workman & Scheer, 2012). In addition to measuring outcomes, evaluation is essential in determining the level of success of the tools and strategies used to deliver an Extension program (Brown & Kiernan, 1998; Bush, Mullis, & Mullis, 1995). Program evaluation can also be used to further refine a target audience for an Extension program (Brown & Kiernan, 1998; Larese-Casanova, 2011), or identify audiences for future Extension programming (Taylor, 2008).

While evaluation is widely used to measure outcomes and impacts of an Extension program, relatively few studies describe the use of evaluation results to guide program improvement. Chapman-Novakofski et al. (2004) described the use of evaluation results in the revision of both program content and evaluation tools. The efforts described here provide not only an example of using evaluation results to guide the revision of an Extension program, but also statistical evidence, gathered through further evaluation, that the revisions were successful.

Evaluation of the Utah Master Naturalist Program

The Utah Master Naturalist Program (UMNP) is a Utah State University (USU) Cooperative Extension program that promotes stewardship of Utah's natural world. The UMNP achieves this goal through increasing awareness and knowledge of Utah's watershed, desert, and mountain ecosystems and associated issues. Additional priorities of the program include initiating or continuing a life-long learning process, developing skills to address environmental issues, and promoting active involvement in creating a positive impact on Utah's natural world.

The UMNP was largely successful in terms of participant knowledge gain and overall evaluation results (Larese-Casanova, 2011). However, some mean evaluation scores were lower than others, suggesting that the UMNP could be improved in the following areas:

  • Meeting expectations and personal goals of the participants
  • Using field experiences to apply knowledge gained in the classroom
  • Inspiring volunteerism

Program Revision Guided by Evaluation

While it was clear that participants were gaining knowledge, targeted revisions were made to the delivery of the UMNP Watersheds class in 2009-2011 in an effort to improve program success. One attempted improvement to meet audience expectations and fulfill personal goals was to have participants read an introductory letter that explained the UMNP goals and objectives prior to registering for a class. Additionally, pre-surveys were conducted or group discussions were initiated at the start of a class to assess participant expectations, and individual class syllabi were adapted to fulfill these expectations.

Additional attempts to improve the UMNP involved maximizing learning opportunities in the field. To plan more relevant and focused field trips that strengthened the connection between classroom instruction and field experiences, each field trip was required to address as many program objectives as possible. Additionally, time spent learning in the classroom was reduced, and readings on the fundamental concepts were assigned prior to each day of class. The fundamental concepts were then applied and reinforced during the field experiences. This not only allowed for more time spent learning in the field, but also required the participants to take a more active role in the learning process.

Efforts to promote volunteering were increased by incorporating service learning and citizen science opportunities into UMNP classes. Service projects included assisting with invasive weed management (e.g., dyer's woad pulls) or conducting rare species inventories (e.g., boreal toad surveys), which gave participants valuable learning opportunities and contributed to the conservation objectives of state agencies. Citizen science opportunities included participating in established programs such as Utah Water Watch, FrogWatch USA, and Beaver Monitoring, coordinated by USU Water Quality Extension.

Further Evaluation to Validate Program Revision

UMNP Watersheds classes during 2012-2013 were evaluated using the same methods as in Larese-Casanova (2011) to determine whether the program revisions were successful. In particular, it was important to evaluate whether implementing new teaching methods (e.g., teaching outside of the classroom, assigning readings before class, service learning) resulted in a significant increase in evaluation results (Bush, Mullis, & Mullis, 1995).

Recent evaluation results from 2012-2013 were compared to early results from 2007-2008 to evaluate the success of revisions to the UMNP (Roucan-Kane, 2008). While there was no control group, comparing the recent evaluation data to the early evaluation data collected using the same methods was still informative, despite the quasi-experimental nature of the analysis (Diem, 2002). Because of the a priori prediction that mean evaluation results would increase with program revision, one-tailed t-tests were used in statistical comparisons.
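As an illustration only, and not the analysis script actually used for the UMNP, a comparison of this kind can be run in Python with SciPy's two-sample t-test, specifying a one-tailed alternative to match the a priori prediction that recent means would be higher. The response values and group sizes below are hypothetical.

    # Minimal sketch, assuming two lists of responses on the -2 (strongly
    # disagree) to 2 (strongly agree) scale; values and group sizes are
    # hypothetical, not the program's data.
    from scipy import stats

    early = [1, 2, 1, 0, 2, 1, 2, 1, 1, 2]     # hypothetical 2007-2008 scores
    recent = [2, 2, 2, 1, 2, 2, 2, 2, 1, 2]    # hypothetical 2012-2013 scores

    # One-tailed test of the a priori prediction that the recent mean is higher
    # (alternative='greater' requires SciPy 1.6 or later; with older versions,
    # halve the two-sided p value when the t statistic is positive).
    t_stat, p_value = stats.ttest_ind(recent, early, alternative='greater')
    print(f"t = {t_stat:.2f}, one-tailed p = {p_value:.4f}")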

The scale of responses to evaluation statements ranged from -2 (strongly disagree) to 2 (strongly agree), with 0 being neutral (Larese-Casanova, 2011). When the two data sets were compared, a significant increase in mean evaluation results was detected for all evaluation parameters related to the targeted program revisions (Table 1).
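For readers who want to produce Table 1-style summaries from raw responses, the sketch below shows one possible approach using pandas; the column names, grouping structure, and response values are hypothetical and are not drawn from the UMNP data.

    # Minimal sketch of computing a mean and standard error per evaluation
    # parameter and cohort, assuming hypothetical columns 'cohort',
    # 'parameter', and 'score' (-2 to 2).
    import pandas as pd

    df = pd.DataFrame({
        "cohort": ["2007-2008"] * 3 + ["2012-2013"] * 3,
        "parameter": ["Meeting participants' expectations"] * 6,
        "score": [1, 1, 2, 2, 2, 2],           # hypothetical responses
    })

    summary = (
        df.groupby(["parameter", "cohort"])["score"]
          .agg(mean="mean", se="sem")          # SEM = standard error of the mean
          .round(2)
    )
    print(summary)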

In addition, there were significant increases in mean evaluation results that indicated that participants had a greater understanding of Utah's watershed ecosystems and that they were inspired to think more about their own use of natural resources (Table 1). While these parameters were not the focus of the targeted program revisions, it is likely that the efforts to increase time spent participating in more focused field experiences, service learning, and citizen science contributed to improving the UMNP Watersheds classes in these areas.

Table 1.
Comparison of Evaluation Results Before and After Program Revisions

Evaluation parameter                           | 2007-2008 mean (SE) | 2012-2013 mean (SE) | p value | t (df)
Meeting participants' expectations             | 1.24 (0.15)         | 1.93 (0.05)         | 0.0004  | 3.68 (72)
Meeting participants' personal goals           | 1.38 (0.11)         | 1.76 (0.10)         | 0.01    | 2.40 (69)
Field trips applied knowledge gained in class  | 1.59 (0.09)         | 1.89 (0.06)         | 0.007   | 2.51 (70)
Inspiring volunteerism                         | 1.23 (0.15)         | 1.59 (0.14)         | 0.049   | 1.68 (71)
Greater understanding of watershed ecosystems  | 1.73 (0.08)         | 1.93 (0.5)          | 0.035   | 1.84 (72)
Considering their own use of natural resources | 1.57 (0.10)         | 1.83 (0.09)         | 0.036   | 1.83 (71)

Summary

The evaluation process used with the UMNP provides quantitative evidence that focused program revisions can result in measurable improvement to program success. Through relatively simple statistical analyses of evaluation data, an Extension educator can identify specific components of an Extension program on which to focus improvement efforts. This not only conserves time, effort, and resources, but also facilitates achievement of the high level of program success expected of Extension educators across the country.

References

Brown, J. L., & Kiernan, N. E. (1998). A model for integrating program development and evaluation. Journal of Extension [On-line], 36(3). Article 3RIB5. Available at: http://www.joe.org/joe/1998june/rb5.php

Bush, C., Mullis, R., & Mullis, A. (1995). Evaluation: An afterthought or an integral part of program development. Journal of Extension [On-line], 33(2). Article 2FEA4. Available at: http://www.joe.org/joe/1995april/a4.php

Chapman-Novakofski, K., DeBruine, V., Derrick, B., Karduck, J., Todd, J., & Todd, S. (2004). Using program evaluation to guide program content: Diabetes Education. Journal of Extension [On-line], 42(3). Article 3IAW1. Available at: http://www.joe.org/joe/2004june/iw1.php

Diem, K. G. (2002). Using research methods to evaluate your Extension program. Journal of Extension [On-line], 40(6). Article 6FEA1. Available at: http://www.joe.org/joe/2002december/a1.php

Flowers, A. B. (2010). Blazing an evaluation pathway: Lessons learned from applying utilization-focused evaluation to a conservation education program. Evaluation and Program Planning, 33, 165–171.

Larese-Casanova, M. (2011). Assessment and evaluation of the Utah Master Naturalist Program: Implications for targeting audiences. Journal of Extension [On-line], 49(5). Article 5RIB2. Available at: http://www.joe.org/joe/2011october/rb2.php

McKenna, C. (1983). Evaluation for accountability. Journal of Extension [On-line], 21(5). Available at: http://www.joe.org/joe/1983september/index.php

Rossi, P. H., Freeman, H. E., & Lipsey, M. W. (2003). Evaluation: A systematic approach (7th ed.). Thousand Oaks, CA: Sage Publications.

Roucan-Kane, M. (2008). Key facts and key resources for program evaluation. Journal of Extension [On-line], 46(1). Article 1TOT2. Available at: http://www.joe.org/joe/2008february/tt2.php

Taylor, G. D. (2008). Better Extension programming through statistics: Using statistical analysis of program evaluations to guide program development. Journal of Extension [On-line], 46(5). Article 5RIB4. Available at: http://www.joe.org/joe/2008october/rb4.php

Van Den Berg, H. A., & Dann, S. L. (2008). Evaluation of an adult extension education initiative: The Michigan Conservation Stewards Program. Journal of Extension [On-line], 46(2), Article 2RIB1. Available at: http://www.joe.org/joe/2008april/rb1.php

Workman, J. D., & Scheer, S. D. (2012). Evidence of impact: Examination of evaluation studies published in the Journal of Extension. Journal of Extension [On-line], 50(2), Article 2FEA1. Available at: http://www.joe.org/joe/2012april/a1.php