December 2008 // Volume 46 // Number 6 // Research in Brief // 6RIB4


Do Workshops Work for Building Evaluation Capacity Among Cooperative Extension Service Faculty?

Abstract
A case study used survey design (pre-test, satisfaction, and post-test) to determine if a 1-day workshop affected participants' skills and self-efficacy in regard to conducting evaluation and if workshop participants applied evaluation skills afterwards. Findings indicate that the workshop was effective in building self-efficacy; however, it did not sustain evaluation practice. Formal training may be necessary to develop skills such as logic modeling, data collection and analysis, and reporting findings to solidify evaluation competencies among participants. It is recommended that Extension faculty engage in continuing education in program evaluation as part of a career development ladder to build evaluation capacity.


Kathleen D. Kelsey
Professor
Oklahoma State University
Stillwater, Oklahoma
Kathleen.kelsey@okstate.edu


Introduction

The need for greater accountability, including outcome and impact reporting, has never been more pressing within the Cooperative Extension Service, which operates in an increasingly competitive and resource-lean environment. Program evaluation is part of the land-grant university's toolbox for ensuring accountability and documenting outcomes and impacts for community-based programs. The need for building evaluation capacity among Extension faculty is especially striking: a study conducted by the National Association of Extension 4-H Agents (2006) found that 80% of respondents desired additional training in evaluation, and Boyd, Guion, and Rennekamp (2005) found that only 17% of Extension evaluation specialists had earned an academic degree specifically in evaluation. The majority of Extension evaluation specialists (57%) seek out continuing education in evaluation theory and practice primarily through independent study.

Self-study is of limited value when professionals lack a conceptual framework of the core principles of evaluation. Adding to the problem, the usefulness of how-to manuals is reduced by "the complexity of the methodology presented, lack of consideration of organizational capacity, resources, and skill levels" of the persons appointed to conduct evaluation (Bozzo, 2000, p. 465). Bozzo also noted that many of the manuals reviewed were of poor quality and that available resources were too complex for the layperson to use. Bozzo called for "organizations to take a more proactive role in building [evaluation] capacity" by training staff to facilitate evaluation processes "through education, training, and skill building" (p. 470).

Contributing to the effectiveness of self-study and workshops are one's beliefs about the outcomes of such efforts. Self-efficacy is "the conviction that one can successfully execute the behavior required to produce outcomes" (Bandura, 1977, p. 193). One's self-efficacy toward a difficult task, such as successfully conducting a program evaluation, can be influenced by mastery experiences. Social persuasion, or coaching, also affects self-efficacy, as do perseverance and sustained effort in the face of adversity.

VanDerZanden (2001) found that Master Gardener workshop participants experienced an increase in confidence after attending a workshop and delivering a training session in their counties. Mutchler, Anderson, Taylor, Hamilton, and Mangle (2006) found that youth who trained others to use computers increased their computer self-efficacy, as well as their computer knowledge. VanDerZanden and Mutchler et al. provide examples of what Bandura referred to as "mastery experiences" supported by coaching, which resulted in higher self-efficacy toward the task.

Building evaluation capacity within the land-grant university depends on several variables, including training in the theory and practice of evaluation, continuing education experiences such as self-study and workshops, and high self-efficacy toward applying lessons learned to conduct evaluation. Using these concepts, the purpose of the study reported here was to determine the impact of a day-long workshop on building evaluation capacity and self-efficacy among Extension faculty in one southern state. The specific research questions were whether 1) the workshop changed participants' self-efficacy in regard to conducting evaluation and 2) participants applied skills taught during the workshop to evaluate programs.

Methods

The case study (Merriam, 1998) was set in the context of a day-long workshop designed to increase evaluation capacity among Extension faculty at a land-grant university. The workshop taught logic modeling, generating evaluation questions, data collection, and how to use evaluation findings to build support for programs.

Participants were asked to complete three surveys. Four months prior to the workshop, Extension faculty (N=180) were emailed a pre-test to inform planning of the workshop content. Fifty-four individuals (30%) returned the survey.

The customer satisfaction survey was used to determine satisfaction with the workshop presenter and content. It was administered face-to-face to all participants (N=36) at the conclusion of the workshop. Thirty individuals returned the survey, for a response rate of 83%.

The post-test survey was emailed to all participants (N=36) 4 months after the workshop. Twenty-three participants returned the survey (64% response rate). Non-response error was controlled by comparing early to late respondents (Lindner, Murphy, & Briers, 2001). No significant differences were found between early and late responders using an independent samples t-test (alpha=0.05); thus, the results of the study can be generalized to workshop participants.
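As an illustration only, the sketch below shows how such an early-versus-late comparison might be run in Python. The scores are hypothetical, and scipy.stats.ttest_ind is used simply as one standard way to perform an independent samples t-test; this is not the study's actual analysis.

```python
# Hypothetical sketch of the non-response check (Lindner, Murphy, & Briers, 2001):
# compare early and late respondents with an independent samples t-test.
# The scores below are illustrative, not the study's data.
from scipy import stats

early = [3.1, 2.8, 3.4, 2.9, 3.0, 3.2, 2.7, 3.1]  # mean item scores, early responders
late = [2.9, 3.0, 3.3, 2.8, 3.1, 2.6, 3.0]        # mean item scores, late responders

t_stat, p_value = stats.ttest_ind(early, late)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A p-value above 0.05 indicates no significant difference between early and
# late responders, supporting generalization to all workshop participants.
```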

The pre- and post-test surveys were checked for face, content, and construct validity with a panel of experts (Extension state specialists in evaluation, Extension Directors, and the Director of Staff and Program Development). Twenty-four questions were asked using a Likert-type response set (strongly agree=4, agree=3, disagree=2, strongly disagree=1, and not applicable=0) (Table 1).
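A minimal sketch of how this response set can be scored follows. The item responses are made up for illustration; the code simply maps the labeled responses to their numeric codes and computes an item mean of the kind reported in Table 1.

```python
# Minimal sketch: scoring one Likert-type item with the response codes used in
# the surveys (strongly agree=4, agree=3, disagree=2, strongly disagree=1,
# not applicable=0). The responses below are hypothetical.
CODES = {
    "strongly agree": 4,
    "agree": 3,
    "disagree": 2,
    "strongly disagree": 1,
    "not applicable": 0,
}

responses = ["agree", "strongly agree", "disagree", "agree", "agree"]
scores = [CODES[r] for r in responses]
item_mean = sum(scores) / len(scores)
print(f"Item mean: {item_mean:.1f}")  # 3.0 on the 4-point coding
```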

Qualitative data were collected as part of the post-test survey. Respondents were asked to write in responses to four open-ended questions: 1) please tell me more about how the workshop impacted your self-confidence in regard to conducting evaluation, 2) list the evaluation skills you are using now that you learned during the workshop, 3) what other consequences have you experienced related to attending the workshop, and 4) comments or suggestions. The data were analyzed for themes and patterns using a qualitative data analysis program, ATLAS.ti®, and reported in the aggregate.

The study was limited by the small sample size and by its implementation in a single state. While the results can only be generalized to the study sample, some analytical generalizations may be useful for planning evaluation capacity-building activities that expand upon workshops.

Findings

The pre-test survey revealed that Extension faculty enjoyed evaluating their programs, that they could collect and analyze data to document their programs' outcomes and impacts, and that they would like to learn more about how to evaluate programs. They disagreed that they could develop a logic model for their programs or write an evaluation report. Overall, they did not see themselves as skilled evaluators.

Results from the customer satisfaction survey were positive, indicating that the workshop itself was not a barrier to developing evaluation skills or building self-efficacy among participants for conducting program evaluation. On a four-point Likert-type scale (strongly agree=4, agree=3, disagree=2, strongly disagree=1), participants reported that the presenter was an effective communicator (mean=3.7), presented material in an interesting way (mean=3.5), motivated participants to practice evaluating their programs (mean=3.4), presented material relevant to their educational needs (mean=3.5), and was an effective teacher (mean=3.6); they also reported that they would recommend the workshop to their colleagues (mean=3.6) and that they learned a lot from it (mean=3.5).

Participants were asked to list two "of the coolest things that I learned today about evaluation." The 57 comments fell into several themes. The largest theme was "I own my evaluation; it's my story." Participants reported feeling empowered to engage in evaluation because it could be used to tell their story regarding program outcomes. Participants also reported that the workshop changed their view of evaluation and taught them more about instrumentation and logic models; they learned that evaluation could include qualitative data, that it should be data based, and that it is doable. Other comments included learning time management skills and the need to educate administrators about evaluation expectations.

Results from the post-test survey revealed that workshop participants had confidence in their ability to conduct program evaluation (collect, analyze, and report data); however, the post-test results were not significantly different from the pre-test results at the .05 alpha level (Table 1). The qualitative findings did highlight more subtle impacts of the workshop.

Qualitative data from the post-test survey indicated that 20 of the 23 participants increased their self-confidence toward conducting evaluation as a result of attending the workshop. Participants reported that the workshop "reinforced my ability" to do evaluation and served to refresh and confirm previously acquired skills, thus boosting self-efficacy toward evaluation. According to one participant, "I feel more confident in using tools other than boring surveys as evaluation instruments." Sixteen of the 23 participants listed 49 comments in response to the question "list the evaluation skills you are using now that you learned during the workshop." The skills included logic modeling, writing effective questions, collecting qualitative data, reporting results, pre-planning, using storytelling in reporting impacts, and building support for programs.

Unintended consequences were listed by six participants and included using evaluation skills for performance appraisals, using a broader array of questionnaires, more focused programming, using parents to help collect evaluation data, and time management. A final open-ended question asked participants for comments. The 10 comments ranged from feeling pressed for time to conduct evaluation to "you reduced our fear of evaluation." A final comment summarized the qualitative data well:

Pretty good in-service overall. The main thing I got out of it was to keep it simple. I felt like many of the educators there wanted to make it harder than it needs to be. The main thing I got was to highlight and publicize the positives; notice and correct the negatives.
Table 1.
Survey Results for the Pre- and Post-Tests

Item | Pre-Test Mean | Post-Test Mean | Difference
I have confidence that I can collect and analyze data to document my programs' outcomes and impacts. | 2.7 | 2.7 | 0
I can write an evaluation report with ease. | 2.5 | 2.7 | .2
I enjoy evaluating my programs. | 2.4 | 2.7 | .3
I am a skilled evaluator. | 2.2 | 2.4 | .2
I can create a logic model for my programs. | 2.2 | 3.0 | .8
I have developed a plan of action (model) for evaluating my programs. | 3.2 | 2.5 | .7
I can write evaluation questions to learn about my program's effectiveness and impacts. | 3.2 | 3.0 | .2
I can budget for an evaluation study. | 3.0 | 2.2 | .8
I understand how to establish indicators for measuring the long-term program outcomes. | 3.2 | 2.9 | .3
I can identify secondary data sources for outcome measurement. | 2.8 | 2.9 | .1
I can collect qualitative data (observations, interviews, focus groups, listening sessions). | 2.9 | 3.2 | .3
I can collect quantitative data (surveys, questionnaires, tests and assessments). | 2.9 | 3.4 | .5
I can analyze qualitative data (coding for themes and patterns in the data). | 2.9 | 2.6 | .3
I can analyze quantitative data (descriptive statistics). | 2.9 | 3.0 | .1
I can report evaluation findings (presenting the data and recommendations for improving practice). | 3.1 | 3.0 | .1
I can use evaluation findings to improve my programs. | 3.1 | 3.1 | .0
I can use evaluation data to get financial support for my programs. | 3.2 | 2.6 | .9
I can gather stakeholder support for evaluation work. | 3.1 | 2.8 | .3
I involve the community in evaluation work. | 2.9 | 2.6 | .3
4=strongly agree, 3=agree, 2=disagree, 1=strongly disagree.
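For readers interested in how item-level comparisons of this kind can be run, the sketch below illustrates one possible test behind the pre/post comparison summarized in Table 1. It is not the study's actual analysis: the ratings are hypothetical, and the use of a Welch (unequal-variance) independent samples t-test is an assumption, chosen because the pre-test and post-test respondent groups were not identical.

```python
# Hypothetical sketch of one item-level pre/post comparison of the kind
# summarized in Table 1. The ratings below are illustrative only.
from scipy import stats

pre = [2, 3, 2, 3, 2, 2, 3, 2, 3, 2]    # hypothetical pre-test ratings for one item
post = [3, 3, 2, 3, 3, 2, 3, 3, 2, 3]   # hypothetical post-test ratings for the same item

# Welch's t-test (unequal variances) for two independent groups of respondents
t_stat, p_value = stats.ttest_ind(pre, post, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}, significant at .05: {p_value < 0.05}")
```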

Conclusions and Recommendations

Consistent with the literature (VanDerZanden, 2001; Mutchler et al., 2006), the findings of the study indicate that the workshop was effective in building self-efficacy. However, because there were no significant differences between the pre- and post-test, long-term and sustained formal training appears necessary to fully develop specific skills such as logic modeling, data collection and analysis, and reporting, and to solidify evaluation competencies among participants.

These findings are consistent with Bozzo's (2000) conclusion that self-study and workshops are generally inadequate for deep learning in evaluation theory and practice. Participants reported that they enjoyed the workshop (they were highly satisfied) and that it increased their confidence in conducting evaluation; however, by the time of the post-test, few specific skills were reported as being practiced (qualitative data), and attitudes had not changed significantly from the pre-test to the post-test (Table 1).

While increasing self-efficacy is a necessary first step in developing skills, it is recommended that Extension educators engage in continuing education in program evaluation as part of a career development ladder to increase evaluation capacity within land-grant universities. Formal course work facilitates mastery experiences (Bandura, 1977) while providing feedback on participants' evaluation efforts in a supportive environment that could also serve as a learning community for county educators.

References

Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84(2), 191-215.

Boyd, H., Guion, L., & Rennekamp, R. (2005). An exploratory profile of extension evaluation professionals. Paper presented at the annual conference of the American Evaluation Association, Toronto, Canada, October 27, 2005.

Bozzo, S. L. (2000). Evaluation resources for nonprofit organizations: Usefulness and applicability. Nonprofit Management and Leadership, 10(4), 463-472.

Lindner, J. R., Murphy, T. H., & Briers, G. E. (2001). Handling nonresponse in social science research. Journal of Agricultural Education, 42(4), 43-53.

Merriam, S. B. (1998). Qualitative research and case study applications in education. San Francisco: Jossey-Bass Publishers.

Mutchler, M. S., Anderson, S. A., Taylor, U. R., Hamilton, W., & Mangle, H. (2006). Bridging the digital divide: An evaluation of a train-the-trainer, community computer education program for low-income youth and adults. Journal of Extension [On-line], 44(3), Article 3FEA2. Available at: http://www.joe.org/joe/2006june/a2.shtml

National Association of Extension 4-H Agents. (October 29, 2006). NAE4-HA Membership survey results. Public Relations and Information and Research, Evaluation and Programs Committees. United States Department of Agriculture. (For a copy of the report contact Dr. Susan Le Menestrel, National Program Leader, Youth Development Research, email: slemenestrel@csrees.usda.gov.)

VanDerZanden, A. M. (2001). Ripple effect training: Multiplying Extension's resources with veteran Master Gardeners as MG trainers. Journal of Extension [On-line], 39(3). Available at: http://www.joe.org/joe/2001june/rb1.html