June 1999 // Volume 37 // Number 3 // Research in Brief // 3RIB1


Program Evaluation and Accountability Training Needs of Extension Agents

Abstract
In recent years, there has been an increased emphasis on documenting program results and impact. Extension agents should not only be aware of various evaluation methods, but also be able to use those methods to document program results. A survey of Clemson University Extension agents was conducted to determine in-service training needs relative to program evaluation, accountability, and research methods. Sixty-two percent of the agents responded to the survey. Major findings were: (a) agents expressed a need for in-service training in specific areas of program evaluation and research methods, (b) agents preferred receiving training via workshops, short courses, video, and seminars, and (c) agents were very receptive to the idea of publishing a newsletter focusing on program evaluation. Two issues of a quarterly newsletter, EVALNEWS, have been published, and several in-service training programs have been offered based on the specific needs expressed by agents.


Rama Radhakrishna
Program Evaluation Specialist
Internet address: RRDHKRS@Clemson.edu

Mary Martin
Grant Development Specialist

Clemson University Cooperative Extension Service
Clemson University
Clemson, South Carolina


Introduction

Educational programs delivered by Extension agents today are more varied than ever and will continue to change to meet the needs of the clientele they serve. Coupled with this diversity of programs, there has been increased emphasis from government and other public agencies on program performance and accountability (Ladewig, 1997). The enactment of the Government Performance and Results Act (GPRA) in 1993 is a good example. Funders, policy makers, and decision makers want data relating to program results, impacts, and social and economic consequences. Counts of programs offered, participants reached, hours worked, and dollars spent are no longer adequate to assess program effectiveness (Radhakrishna, 1997).

The question most frequently asked of Extension professionals is "What happened as a result of your program?" Extension administrators, faculty, specialists, and agents constantly hear buzzwords such as documentation, impact, outcome, output, effectiveness, and accountability. This expanded requirement to document program results and impacts calls for the use of multiple methods of evaluation.

Several studies have identified lack of time, lack of resources, and limited expertise in evaluation methodology (such as developing surveys, analyzing data, and reporting) as factors inhibiting agents from conducting program evaluation (M. J. Depp, personal communication, August 1996; Kiernan et al., 1994; H. Ott, personal communication, August 1996). There is a perception among Extension agents that program evaluation means their own performance evaluation (H. Ott, personal communication, August 1996). In addition, an attitude is developing among agents that GPRA is just another reporting mechanism. There is a need to convey to agents that GPRA is more than just reporting and to emphasize up front how GPRA reports can be used to improve Extension programs, reallocate resources, and make managerial and personnel decisions (J. Walker, personal communication, November 1997).

Extension agents must possess the necessary skills and experiences not only to conduct their programs but also to systematically evaluate program outcomes and impact. In addition, agents and program leaders must possess the skills and expertise needed to communicate evaluation results to their stakeholders--funders, administrators, legislators, government officials, and clientele.

Identification of the program evaluation and accountability training needed to meet the expanded requirements will go a long way toward improving the skills of Extension agents to conduct quality evaluations and to document program results and impacts. As Extension agents consider various methods to evaluate programs, inquiries must be made to identify needs relative to program evaluation and how those needs can be met. As indicated by Lentz (1983), the purpose of identifying needs is to build a foundation for providing in-service education. Such identification will assist staff development leaders in establishing priorities and designing activities. In a related study, Barrick and Powell (1986) suggested that the strength of in-service training/workshops and the follow-up evaluation depends upon planning, and planning depends on assessing needs.

Purpose and Objectives

The overall purpose of this study was to assess training needs of Extension agents relative to program evaluation and accountability. The first objective was to determine agents' need for in-service education in the areas of program evaluation and accountability, research methodology, and curriculum design. The second objective was to identify the delivery methods that agents find most useful for receiving in-service training. The third objective was to determine preferences for in-service activities, such as the location and timing of in-service offerings, interest in publishing a newsletter, and willingness to contribute to that newsletter.

Methods and Procedures

The population for the study consisted of all Extension agents (N = 210) employed at Clemson University. The sampling frame was obtained from the personnel office. A mail survey instrument was developed to collect data for the study. The instrument had four sections: (a) in-service education needs; (b) delivery methods for in-service training; (c) demographic and program information; and (d) in-service education information. Items in the first two sections were measured on a five-point Likert scale. The instrument was reviewed for content and face validity by a panel of five experts. A cover letter and a copy of the survey were mailed to the population in November 1997. After the initial mailing and two follow-ups (another copy of the survey and electronic mail messages), a total of 130 agents responded, for a response rate of 62%. Early and late respondents were compared on the variables in the first two sections, as per procedures suggested by Miller and Smith (1983). No significant differences were found between the two groups and, as such, the data were generalized to the population. Data were analyzed using descriptive statistics.
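For readers who wish to replicate the nonresponse check and the descriptive analysis, the sketch below illustrates one way such an analysis could be carried out. It is not the authors' code; the data file name, column names, and the use of independent-samples t-tests are assumptions made for illustration.

```python
# Illustrative sketch (not the authors' code): compare early and late
# respondents on the survey items as a nonresponse check (Miller & Smith,
# 1983), then summarize the items with descriptive statistics.
import pandas as pd
from scipy import stats

# Hypothetical data file with one row per respondent; "wave" flags whether
# the survey returned after the initial mailing ("early") or a follow-up ("late").
df = pd.read_csv("agent_survey.csv")

# Hypothetical naming convention: five-point Likert items start with "item_".
item_cols = [c for c in df.columns if c.startswith("item_")]

early = df[df["wave"] == "early"]
late = df[df["wave"] == "late"]

# Item-by-item comparison of early and late respondents.
for col in item_cols:
    t, p = stats.ttest_ind(early[col].dropna(), late[col].dropna(), equal_var=False)
    print(f"{col}: t = {t:.2f}, p = {p:.3f}")

# Descriptive statistics (means, standard deviations, etc.) for each item.
print(df[item_cols].describe().round(2))
```

If none of the comparisons is statistically significant, respondents can reasonably be treated as representative of the population, as was done in this study.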

Results/Findings

Demographic Profile of Agents

A majority of the respondents were male (52%). Respondents averaged 16 years of work experience. Family living/home economics was the primary area of program responsibility for 30 agents (24%), 4-H/youth development for 25 agents (20%), agriculture for 25 agents (20%), horticulture for 15 agents (12%), and forestry, natural resource development, and community development for 12 agents (9%). Nineteen agents (15%) reported an "other" category (dairy, animal science, etc.). A majority of agents (63%) reported a master's degree as their highest level of education, followed by a bachelor's degree (32%), a Ph.D. (3%), and vocational certification (2%).

Objective 1--In-service Training Needs

Agents were asked to indicate on a five-point scale, 1 (not at all needed) to 5 (very much a need), the extent to which they needed in-service training in three areas--program evaluation and accountability, research methods, and curriculum design. As shown in Table 1, more than one-half of the agents indicated a "moderate to very much need" for in-service training in developing evaluation plans (65%), focusing and organizing evaluations (62%), preparing evaluation reports (57%), and using evaluation results (54%). In the research methods category, agents expressed a need for in-service training in designing questions and surveys (59%), analyzing and interpreting results (50%), and data collection methods (48%). Agents indicated only some need for in-service training in curriculum design and learning experiences.

Objective 2--Delivery Methods

Agents were asked to indicate on a five-point scale, 1 (not at all useful) to 5 (most useful), the delivery methods they find useful for receiving in-service training. Results are shown in Table 2. Workshops were rated the most useful delivery method by a majority of agents (76%), followed by short courses (57%), seminars (56%), and video conferences (54%).

Table 1
In-service Training Needs of Extension Agents (N = 130)

                                                        No/Little   Some    Moderate/Very
                                                        Need        Need    Much Need
                                                        -----------------%----------------
Program Evaluation and Accountability
   1. Developing evaluation plans                           11        24        65
   2. Focusing and organizing evaluations                   12        26        62
   3. Preparing evaluation reports                          13        30        57
   4. Using evaluation results                              18        28        54
   5. Accountability                                        26        27        47
   6. Conducting needs assessments                          22        32        46
   7. Writing program objectives                            24        32        44
Research Methodology
   1. Designing questions and surveys                       22        19        59
   2. Analyzing and interpreting results                    16        34        50
   3. Data collection methods                               23        29        48
   4. Sampling techniques                                   29        28        43
   5. Conducting focus groups                               33        32        35
Curriculum/Teaching/Learning
   1. Curriculum design                                     29        30        41
   2. Identifying and organizing learning experiences       30        36        34
   3. Teaching skills                                       32        39        29

Note: The scale anchors were combined for ease of reporting: 1, 2 = no/little need; 3 = some need; and 4, 5 = moderate/very much need.
Table 2
Usefulness of Delivery Methods (N = 130)

Delivery Methods            Not at all   Somewhat   Moderate/Very
                            Useful       Useful     Much Useful
                            -----------------%----------------
1. Workshops                     4          20           76
2. Short courses                16          27           57
3. Seminars                     21          23           56
4. Video                        20          26           54
5. Formal classes               46          31           23
6. Lecture                      38          30           22

Note: The scale anchors were combined for ease of reporting: 1, 2 = not at all useful; 3 = somewhat useful; and 4, 5 = moderate/very much useful.
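The percentages in Tables 1 and 2 reflect five-point ratings collapsed into three reporting categories, as described in the table notes. The following sketch (not the authors' code) illustrates how such a collapse and the resulting percentages could be computed; the item names and sample ratings are invented for illustration.

```python
# Illustrative sketch (not the authors' code): collapse five-point ratings
# into the three categories used in Tables 1 and 2 and report the percentage
# of agents falling into each category.
import pandas as pd

# Hypothetical responses for two items (one from each table).
ratings = pd.DataFrame({
    "developing_evaluation_plans": [5, 4, 3, 2, 5, 4, 1, 3, 5, 4],
    "workshops": [5, 5, 4, 3, 5, 4, 2, 5, 4, 3],
})

def collapse(score):
    """Map 1-2 to the low category, 3 to the middle, and 4-5 to the high,
    mirroring the combining of scale anchors described in the table notes."""
    if score <= 2:
        return "no/little"
    if score == 3:
        return "some"
    return "moderate/very much"

# Percentage of respondents in each collapsed category, per item.
for col in ratings.columns:
    pct = (ratings[col].map(collapse)
                       .value_counts(normalize=True)
                       .mul(100).round(0))
    print(col, pct.to_dict())
```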

Objective 3--In-service Training Activities

An overwhelming majority of agents (84%) preferred various locations in South Carolina for receiving in-service training. The best time of year to receive in-service training was winter for 48% of agents, followed by spring (19%) and fall (13%). Seventy-one percent of agents supported the idea of publishing a quarterly newsletter focusing on program evaluation, and 80% said they would share evaluation results of their programs for publication in the newsletter.

Conclusions and Implications

Findings of this study clearly indicate a need for in-service training in program evaluation and accountability and in research methods. Extension agents prefer to receive in-service training via workshops, seminars, short courses, and video conferences. Respondents are very receptive to the idea of a newsletter focusing on program evaluation and to contributing their Extension program results to such a newsletter. The findings of this study have provided valuable information to staff development leaders relative to offering in-service training programs. Based on the findings, the following recommendations were made and have been implemented.

Four major in-service education offerings were conducted in the last 10 months, focusing on needs expressed by agents: developing evaluation plans, writing questions, and constructing questionnaires. Two in-service training programs were offered on planning and organizing program evaluations and on writing questions and designing surveys. In addition, several workshops were conducted at the county/cluster level, depending on the needs of Extension agents. The feedback from participants was very positive, and the overall ratings of the training were high (6.4 on a seven-point scale).

The findings of this study have provided valuable information for offering in-service training based on (a) subject matter topics relative to program evaluation, accountability, and research methodology, and (b) the primary area of program responsibility of Extension agents. This will go a long way in addressing the specific and critical needs of Extension agents relative to program evaluation. For example, two in-service training programs are being planned exclusively for 4-H and agriculture agents.

In-service training via traditional methods such as workshops and seminars will be continued. Plans are being made to develop and offer two or three short courses (3-5 days' duration) in program evaluation and research methodology. In addition, three video teleconferences were aired during monthly staff conferences to answer specific questions related to plan of work and accountability.

A video, "Introduction to Program Evaluation and Accountability," is being developed and produced. Major topics included in the 2-3 hour video are: (a) early history of Extension program evaluation, (b) definitions and purposes of evaluation, (c) steps in conducting evaluation, (d) differences and similarities between evaluation and accountability, and (e) assessing impact. The accountability section of the video contains information on the plan of work and the accomplishment indicators associated with projects, GPRA goals, and guidelines for reporting accomplishment data for state and federal requirements.

Two issues of the newsletter, EVALNEWS, have been published. The inaugural issue contained messages from the editor and the Extension director, a summary of the findings of this survey, tips for writing success stories, and other news items. The second issue included information on sharing evaluation results with stakeholders, two success stories written by Extension agents, and news items relative to accountability.

Results of this study also provided a basis to write a mini-grant under Alliance 2020, a Kellogg Foundation initiative, for offering in-service training, short courses, and video production. Finally, the findings of this study are being shared with staff development and Extension administration to garner support and make informed decisions in planning and offering in-service training programs in the future.

References

Barrick, K. R., & Powell, R. L. (1986). Assessing needs and planning in-service education for the first year vocational education teachers. Proceedings of the 13th Annual National Agricultural Education Research Meeting, 42-47.

Kiernan, N. E., Fennelly, K., Mulkeen, P., Mincemoyer, C., Cornell, A., Masters, S., Radhakrishna, R. B., Lewis, R., & Baggett, C. D. (1994). Youth program evaluation study. University Park, PA: The Pennsylvania State University.

Ladewig, H. (1997, August). Demonstrating accountability through collaboration and partnerships. Paper presented at the Joint Southern Region Program Committee Meeting, Tallahassee, FL.

Lentz, M. T. (1983). Needs assessment and data collection. In R. J. Mertz (Ed.), Staff development leadership: A resource book. Columbus: Ohio Department of Education.

Miller, L. E., & Smith, K. L. (1983). Handling nonresponse issues. Journal of Extension, 21(5), 45-50.

Radhakrishna, R. B. (1997). Program evaluation and accountability needs of Extension professionals in the 21st century. Unpublished report, Clemson University, Clemson, SC.

United States Department of Agriculture (1993). The Government Performance and Results Act of 1993. Washington, DC.