The Journal of Extension - www.joe.org

December 2013 // Volume 51 // Number 6 // Research In Brief // v51-6rb1

Does Evaluation Competence of Extension Educators Differ by Their Program Area of Responsibility?

Abstract
Raising evaluation capacity is not an easy task unless the evaluation competence of Extension professionals in each program area is identified. The study reported here examined whether the level of knowledge and skills and the need for further training in program evaluation vary among Extension educators based on their program area. A total of 752 Extension educators participated in the study through an online survey. Differences in evaluation competence and further training needs were observed among Extension educators across program areas. Therefore, offering training for Extension educators based on evaluation competence and area of responsibility is recommended for optimum training outcomes.


Nav R. Ghimire
Agriculture Agent
University of Wisconsin-Extension
Green Lake County, Wisconsin
nav.ghimire@ces.uwex.edu

Robert A. Martin
Professor
Department of Agricultural Education and Studies
Iowa State University
Ames, Iowa
drmartin@iastate.edu

Introduction

For more than 100 years, the Cooperative Extension Service has been assisting communities through educational outreach mainly in four different program areas: Agriculture and Natural Resources (ANRE), 4-H Youth Development (4-HYD), Family and Consumer Sciences (FCS), and Community and Economic Development (CED).

Extension educators in each program area assist communities to understand and address problems and develop solutions through educational program participation. One of the objectives of such programs is to evaluate their impact on people's lives (Graham, 1994). In 1993, the Government Performance and Results Act (GPRA) (USDA, 1993) required Extension educators to communicate and document the impact of educational programs to key stakeholders (O'Neill, 1998; Radhakrishna, 2001). Since then, program funders and decision makers have increasingly evaluated Extension programs by linking budget allocations to program accomplishments and asking educators to quantify the impact of an educational program with a dollar figure (Bailey & Deen, 2002; Boyle, 1997; Franz & Cox, 2012). However, many Extension educators do not conduct meaningful evaluations, and most educational programs receive little or no evaluation (Barker & Killian, 2011; West, 2007).

The two main reasons Extension educators devote inadequate attention to program evaluation are a lack of knowledge and skills and inadequate opportunities for improving their evaluation capacities (Chapman-Novakofski et al., 1997; Rennekamp & Engle, 2008). According to King and Cooksy (2008), Extension educators often use inappropriate performance indicators to measure program outcomes. Consequently, they have conveyed incorrect messages about program impacts to decision makers at various levels. These educators must possess the necessary skills not only to implement programs, but also to systematically evaluate program outcomes (Rennekamp & Arnold, 2009).

Building the evaluation capacity of Extension educators is one of the important tasks for the Cooperative Extension Service in the United States. However, this task is not an easy one unless the evaluation competence of Extension educators in each program area is identified. One of the authors of this article is a member of a state-level Cooperative Extension evaluation team. He found that each program area is unique in terms of educational offerings, nature of the program, clients served, human resource development, and professional culture. Individual program areas recruit Extension educators based on academic background and experience specific to the program area's needs; these educators often participate in professional development events specific to their required skill sets, organized by the program area departments to which they belong.

The authors' experience therefore suggests that involving all Extension educators in a single "one model fits all" professional development plan may not be useful for improving their evaluation capacities. Given this scenario, the study reported here aimed at identifying the evaluation competence of Extension educators in terms of their affiliation with individual program areas. The research findings may have implications for the Cooperative Extension Service in designing successful professional development programs and using scarce resources for the best training outcomes.

Objectives of the Study

The objectives of the study were to:

  1. Identify if the level of knowledge and skills in program evaluation differs for Extension educators based on their program area of responsibility.
  2. Identify if the level of needs for further training in program evaluation differs for Extension educators based on their program area of responsibility.
  3. Based on their program area of responsibility, determine if there is a statistically significant difference in respondents' mean ratings for knowledge and skills in program evaluation and their needs for further training.

Methods

Population and Sample

The target population for the study was all Extension educators based in county and regional Extension Services in the 12 states of the North Central Region: Illinois, Indiana, Iowa, Kansas, Michigan, Minnesota, Missouri, Nebraska, North Dakota, Ohio, South Dakota, and Wisconsin. Extension educators in county and regional Extension Services were included in the study because they are directly involved in program development and implementation. The study used a descriptive survey research design and was conducted as a census of 2,497 Extension educators. The sampling frame was the current list of Extension educators found on each state's Cooperative Extension Service website.

Instrumentation

The data collection instrument for the study was a closed-form questionnaire adapted from a competency research study by Ghimire and Martin (2011). The instrument included 11 competencies related to program evaluation (Table 1). These competencies were measured using a five-point Likert-type scale to identify respondents' level of knowledge and skills and to determine their needs for further training (1 = very low, 2 = low, 3 = moderate, 4 = high, 5 = very high). The instrument also included demographic questions such as major area of responsibility, years of experience, level of education, and gender.

A panel of five experts reviewed the survey instrument for face, content, and construct validity. The reviewers included three professors of Agricultural Education and two state-level Extension leaders. To determine the reliability of the instrument, a pilot study was conducted with 35 Extension educators; Sudman (1976) stated that a pilot test of 20-50 cases is sufficient to discover the major flaws in a questionnaire. The Cronbach's alpha reliability coefficient was .94 for knowledge and skills and .95 for needs for further training. According to George and Mallery (2003), a Cronbach's alpha ≥ .70 is appropriate for conducting a study.
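The reliability analysis was run in SPSS; as a point of reference for readers working outside SPSS, a minimal sketch of the Cronbach's alpha calculation is shown below. The pandas usage, file name, and column layout are illustrative assumptions, not materials from the study.

```python
import pandas as pd

def cronbachs_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert items (rows = respondents, columns = items)."""
    k = items.shape[1]                               # number of items (11 competencies here)
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summated scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical pilot data: 35 respondents x 11 competency ratings (1-5)
pilot = pd.read_csv("pilot_knowledge_items.csv")     # assumed file name
print(f"Cronbach's alpha = {cronbachs_alpha(pilot):.2f}")  # the study reports .94
```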

Data Collection

A pre-notice email message was sent to potential respondents informing them about the study, its objectives, and their potential participation. A week after the pre-notice message, a cover letter with a link to the online questionnaire was emailed to participants. A total of three reminder email messages were sent to nonrespondents. Early and late respondents were categorized as suggested by Ary, Jacobs, and Sorensen (2010), and a Mann-Whitney U test between the two groups did not yield any difference in their ratings for knowledge and skills or needs for further training in program evaluation, suggesting that nonresponse bias was not a concern. The response rate for the study was 30% (n = 752).
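As an illustration of this nonresponse-bias check, the sketch below compares early and late respondents' summated scores with a Mann-Whitney U test; the group values and the scipy call are assumptions for demonstration, not the study's data.

```python
from scipy.stats import mannwhitneyu

# Hypothetical summated knowledge-and-skills scores for early and late respondents
early = [2.9, 3.1, 2.6, 3.4, 2.8, 3.0, 2.7]
late = [3.0, 2.8, 3.2, 2.9, 2.5, 3.1]

u_stat, p_value = mannwhitneyu(early, late, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")
# A non-significant p-value, as reported in the study, suggests early and late
# respondents rated the items similarly, so nonresponse bias is less of a concern.
```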

Data Analysis

Data from the questionnaire were coded and entered into SPSS 20 for analysis. Twenty-five survey responses were selected randomly and checked against the coded data to detect and correct any potential coding errors. Means and standard deviations were computed to describe respondents' knowledge and skills and their needs for further training (Objectives 1 and 2). A summated mean score was also computed for each respondent and analyzed for differences based on program area of responsibility using one-way analysis of variance with a Bonferroni post-hoc test (Objective 3) (Boone & Boone, 2012; Clason & Dormody, 1994).
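The study conducted this analysis in SPSS 20; the sketch below illustrates the equivalent steps (summated mean score, one-way ANOVA, and Bonferroni-adjusted pairwise comparisons) in Python. The file name, column names, and use of scipy and statsmodels are assumptions for illustration only.

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import MultiComparison

# Hypothetical layout: one row per respondent, 11 knowledge/skill item columns
# (ks_1 ... ks_11) plus a program_area column; file and column names are assumed.
df = pd.read_csv("evaluation_survey.csv")
item_cols = [f"ks_{i}" for i in range(1, 12)]
df["summated_mean"] = df[item_cols].mean(axis=1)  # summated mean score per respondent

# One-way ANOVA comparing summated means across the four program areas (Objective 3)
groups = [g["summated_mean"].values for _, g in df.groupby("program_area")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")

# Bonferroni-adjusted pairwise comparisons, analogous to the reported post-hoc test
mc = MultiComparison(df["summated_mean"], df["program_area"])
result = mc.allpairtest(stats.ttest_ind, method="bonf")
print(result[0])  # table of pairwise tests with Bonferroni-corrected p-values
```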

Results

Objective 1: Identify Extension Educators' Knowledge and Skills in Program Evaluation

Table 1 shows that for most competencies, the ANRE educators had the lowest mean scores, indicating that their evaluation knowledge and skills were between the low and moderate levels. In contrast, the CED educators reported the highest mean scores for most competencies, indicating moderate to high knowledge and skills. For each competency, the 4-HYD and FCS educators had similar mean scores, ranging from the low to moderate level.

Table 1.
Mean Ratings by Program Area of Extension Educators' Knowledge and Skills in Various Evaluation Competencies

Evaluation Competencies

ANRE (n = 236) 4-HYD (n = 191) FCS (n = 234) CED (n = 70)
M SD M SD M SD M SD
Analyze and interpret survey results 2.80 1.15 2.70 .96 2.59 1.02 3.39 1.08
Assess client expectations 2.92 .92 3.11 .92 3.04 .94 3.16 1.04
Assess impact of a program 2.76 .96 2.90 .85 2.89 .87 3.07 1.04
Assess learning outcomes 2.69 .87 2.94 .87 2.93 .88 3.17 .96
Develop survey instruments 2.70 1.09 2.67 .99 2.56 1.03 3.39 1.14
Evaluate results of extension activities 2.95 .94 2.95 .87 2.91 .90 3.16 .92
Evaluate your performance as an educator 2.97 .92 3.26 .84 3.17 .86 3.30 .96
Identify problems requiring additional research 2.93 1.02 2.62 .94 2.67 .97 3.07 1.08
Implement survey research 2.43 1.10 2.54 1.05 2.36 1.08 3.10 1.24
Use impact data for planning 2.73 1.02 2.96 .90 2.88 1.00 3.04 1.19
Use techniques to assess learner's reaction to learning experiences 2.61 .93 2.96 .85 2.90 .99 2.94 1.14
Summated Mean Score 2.77 .72 2.87 .66 2.80 .72 3.61 .83

Note: 1 = Very low, 2 = Low, 3 = Moderate, 4 = High, 5 = Very high. ANRE = Agriculture and Natural Resources, 4-HYD = 4-H Youth Development, FCS = Family and Consumer Sciences, CED = Community and Economic Development.

Objective 2: Identify Extension Educators' Self-Assessed Needs for Further Training in Program Evaluation

For most competencies, the ANRE, 4-HYD, and FCS educators reported higher mean scores, with training needs ranging from the moderate to high level (Table 2). The CED educators reported low to moderate needs for further training in the given competencies.

Table 2.
Mean Ratings by Program Area of Extension Educators' Further Training Needs in Various Evaluation Competencies

Evaluation Competencies

ANRE (n = 236) 4-HYD (n = 191) FCS (n = 234) CED (n = 70)
M SD M SD M SD M SD
Analyze and interpret survey results 3.08 1.04 3.24 1.04 3.21 1.00 2.59 1.04
Assess client expectations 3.03 .87 2.92 .90 2.85 .80 2.71 .93
Assess impact of a program 3.40 1.00 3.36 .98 3.20 .89 3.16 1.05
Assess learning outcomes 3.26 .86 3.21 .95 3.11 .84 2.76 .95
Develop survey instruments 3.28 1.07 3.41 .98 3.27 .99 2.71 1.06
Evaluate results of extension activities 3.21 .95 3.21 .98 3.14 .89 2.89 .92
Evaluate your performance as an educator 3.10 1.02 2.93 .96 2.85 .94 2.69 1.01
Identify problems requiring additional research 2.87 .91 3.08 .96 2.95 .88 2.87 1.07
Implement survey research 3.14 1.02 3.32 1.02 3.21 1.00 2.71 1.01
Use impact data for planning 3.08 .93 3.10 .95 3.04 .91 2.71 1.02
Use techniques to assess learner's reaction to learning experiences 3.16 .83 3.13 .94 3.00 .90 2.91 .91
Summated Mean Score 3.14 .72 3.17 .74 3.07 .64 2.79 .77

Note: 1 = Very low, 2 = Low, 3 = Moderate, 4 = High, 5 = Very high. ANRE = Agriculture and Natural Resources, 4-HYD = 4-H Youth Development, FCS = Family and Consumer Sciences, CED = Community and Economic Development.

Objective 3: Determine Whether There Is a Statistically Significant Difference in Summated Mean Ratings for Knowledge and Skills in Program Evaluation and Needs for Further Training

The one-way analysis of variance revealed a statistically significant difference in summated mean ratings by respondents' program area of responsibility (Table 3). Further analysis with the Bonferroni post-hoc test showed a statistically significant difference in respondents' knowledge and skills between CED (M = 3.61) and each of 4-HYD (M = 2.87), FCS (M = 2.80), and ANRE (M = 2.77). Similarly, a statistically significant difference was observed in respondents' needs for further training between CED (M = 2.79) and each of ANRE (M = 3.14), 4-HYD (M = 3.17), and FCS (M = 3.07).

The Bonferroni post-hoc test also revealed that Extension educators with seven or fewer years of experience needed the most professional development in program evaluation. It is interesting to note that no statistically significant differences in respondents' further training needs were observed by gender or level of education.

Table 3.
One-Way Analysis of Variance for Summated Mean Ratings by Respondents' Program Area of Responsibility

Source of Variance df MS F Sig.
Knowledge and skills in program evaluation Between Groups 3 2.92 5.56 .001
Within Groups 727 .52    
Needs for further training in program evaluation Between Groups 3 2.92 5.61 .001
Within Groups 727 .52    
Note. Area of Responsibility: 1 = Agriculture and Natural Resources, 2 = 4-H Youth Development, 3 = Family and Consumer Sciences, 4 = Community and Economic Development.

Conclusions

Based on the findings from the study reported here, two main conclusions were drawn. First, the level of evaluation competence and the professional development needs of Extension educators differed by their program area of responsibility. Second, professional development plans should vary for Extension educators according to their individual program areas and their levels of knowledge and skills in the evaluation competencies targeted by the training.

The study also validates the findings of studies conducted by McClure, Fuhrman, and Morgan (2012) in Georgia and by Ghimire and Trechter (2012) in Wisconsin. Both studies found that evaluation competence and professional development needs of Extension educators differ by their area of responsibility and that Extension professionals with primary responsibilities in Agriculture and Natural Resources needed the most assistance in program evaluation. Extension educators in Wisconsin reported that they were most likely to participate in further training focused on quantitative and qualitative data analysis, observational techniques, survey development, and focus group implementation.

Implications and Educational Significance

Findings of the study have important implications for state administrators and professional development leaders in Extension in the United States. First, the self-assessed needs for further training in program evaluation indicate that Extension educators would respond positively to professional development programs offered on this topic. The findings therefore have implications for developing policies and guidelines for designing professional development trainings, as well as for selecting, hiring, and promoting Extension staff in all program areas.

Second, in Ghimire and Martin's (2011) study, Extension educators identified their preferred competency acquisition venues as in-service training, graduate programs, and on-the-job learning. The findings thus have further implications for (1) developing in-service trainings in program evaluation in the Cooperative Extension Service, (2) designing evaluation courses in land-grant universities and colleges for students pursuing careers as Extension educators, and (3) creating an environment for learning evaluation skills on the job through the experiential learning process.

Third, the findings may also provide guidelines for private Extension organizations in the United States (such as agricultural cooperatives) in identifying organizational training priorities for the continued professional development of their employees in program evaluation.

Recommendations

Extension leaders in the North Central Region (USA) should offer professional development programs in competencies related to program evaluation across the program areas, primarily to ANRE, 4-HYD, and FCS Extension educators. The focus of this training should be on the competencies with the highest mean ratings for training needs. Program area directors should also critically assess their current professional development activities and revise existing training curricula to include and/or emphasize the identified competencies.

Extension leaders should design flexible staff development through in-service training, graduate programs, and on-the-job training. Experiential learning workshops and training programs held in the workplace would enhance employees' capacity for program evaluation through practice and experience. In addition, senior educators could serve as mentors for new employees to encourage them to learn evaluation competencies directly from field experience. This type of learning, however, requires teamwork and collaboration that foster a feeling of psychological safety. The study reported here suggests that universities and colleges with academic Extension education programs review their curricula to make sure that future Extension educators are well trained to assess program outcomes using scholarly evaluation methods. Offering courses on program evaluation is recommended, especially for colleges of agriculture (where many Extension services are housed).

References

Ary, D., Jacobs, L., & Sorensen, C. (2010). Introduction to research in education (8th ed.). Belmont, CA: Wadsworth.

Bailey, S. J., & Deen, M. Y. (2002). A framework for introducing program evaluation to Extension faculty and staff. Journal of Extension [On-line], 40(2), Article 2IAW1. Available at: http://www.joe.org/joe/2002april/iw1.php

Barker, W., & Killian, E. (2011). Tips and tools: The art of virtual program evaluation - measuring what we do with pizzazz. Journal of Extension [On-line], 49(1), Article 1TOT4. Available at: http://www.joe.org/joe/2011february/tt4.php

Boone, H. N., & Boone, D. A. (2012). Analyzing Likert data. Journal of Extension [On-line], 50(2), Article 2TOT2. Available at: http://www.joe.org/joe/2012april/tt2.php

Boyle, P. (1997, May/June). What's the impact? Epsilon Sigma Phi Newsletter, 68, 1-4.

Chapman-Novakofski, K., Boeckner, L. S., Canton, R., Clark, C. D., Keim, K., Britten, P., & McClelland, J. (1997). Evaluating evaluation - What we've learned. Journal of Extension [On-line], 35(1), Article 1RIB2. Available at: http://www.joe.org/joe/1997february/rb2.php

Clason, D. L., & Dormody, T. J. (1994). Analyzing data measured by individual Likert-type items. Journal of Agricultural Education, 35(4), 31-35.

Franz, N. K., & Cox, R. A. (2012). Extension's future: Time for disruptive innovation. Journal of Extension [On-line], 50(2), Article 2COM1. Available at: http://www.joe.org/joe/2012april/comm1.php

George, D., & Mallery, P. (2003). SPSS for Windows step by step: A simple guide and reference (4th ed.). New York: Pearson Education.

Ghimire, N. R., & Martin, R. A. (2011). A professional competency development model: Implications for Extension educators. Journal of International Agricultural and Extension Education, 18(2), 5-17. 

Ghimire, N. R., & Trechter, D. (2012). Extension educators and program evaluation - Results of a UW-Extension survey. University of Wisconsin Cooperative Extension Service, Department of Program Development and Evaluation, Evaluation Leadership and Support Team. Unpublished manuscript.

Graham, D. L. (1994). Cooperative Extension System. In C. J. Arntzen (Ed.), Encyclopedia of agricultural science (Vol. 1, pp. 415-430). New York: Academic Press.

King, N. J., & Cooksy, L. J. (2008). Evaluating multilevel programs. New Directions for Evaluation, 120, 27-39.

McClure, M. M., Fuhrman, N. E., & Morgan, A. C. (2012). Program evaluation competencies of Extension professionals: Implications for continuing professional development. Journal of Agricultural Education, 53(4), 85-97.

O'Neill, B. (1998). Money talks: Documenting the economic impact of Extension personal finance programs. Journal of Extension [On-line], 36(5), Article 5FEA2. Available at: http://www.joe.org/joe/1998october/a2.php

Radhakrishna, R. B. (2001). Professional development needs of the state Extension specialist. Journal of Extension [On-line], 39(5), Article 5RIB4. Available at: http://www.joe.org/joe/2001october/rb4.html

Rennekamp, R. A., & Engle, M. (2008). A case study of organizational change: Evaluation in Cooperative Extension. New Directions for Evaluation, 120, 15-26.

Rennekamp, R. A., & Arnold, M. E. (2009). What progress, program evaluation? Reflections on a quarter-century of Extension evaluation practice. Journal of Extension [On-line], 47(3), Article 3COM1. Available at: http://www.joe.org/joe/2009june/comm1.php

Sudman, S. (1976). Applied sampling. New York: Academic Press.

United States Department of Agriculture (USDA). (1993). The Government Performance and Results Act of 1993. Washington, DC.

West, B. C. (2007). Conducting program evaluation using the internet. Journal of Extension [On-line], 45(1), Article 1TOT3. Available at: http://www.joe.org/joe/2007february/tt3.php