February 1997 // Volume 35 // Number 1 // Research in Brief // 1RIB2


Evaluating Evaluation -- What We've Learned

Abstract
Impact evaluation will be an important part of establishing program accountability within future Extension systems. Telephone interviews with Extension field staff and nutrition specialists who participated in field testing evaluation instruments highlight the benefits of and barriers to evaluation processes. The perceived importance of general program evaluation processes is reported, and points to consider for implementing effective evaluation processes are presented.


K. Chapman-Novakofski
Extension Nutrition Specialist
University of Illinois
Urbana, Illinois
Internet address: kmc@uiuc.edu

L.S. Boeckner
Extension Nutrition Specialist
University of Nebraska
Scottsbluff, Nebraska
Internet address: hnfm015@unlvm.unl.edu

R. Canton
Extension Evaluation Assistant
University of Illinois
Urbana, Illinois

C.D. Clark
Extension Evaluation Specialist
University of Illinois
Urbana, Illinois

K. Keim
Research Associate
Colorado State University
Fort Collins, Colorado
(formerly at University of Idaho)

P. Britten
Extension Nutrition Specialist
University of Hawaii
Honolulu, Hawaii

J. McClelland
Extension Nutrition Specialist
North Carolina State University
Raleigh, North Carolina


Impact evaluation is becoming more important as accountability becomes more crucial and financial support becomes less available. Indicators of impact are expected in experimental situations, but are also becoming prominent in selected community and health education programs (American Dietetic Association, 1995; American Public Health Association, 1991; St. Pierre, 1982). Although many people intuitively recognize the value of documenting program impact, the literature is scarce outside the realm of large intervention education programs.

Extension nutrition educators have long worked to develop national nutrition education impact indicators that would facilitate collection of national Cooperative Extension data (Brink, 1986; Voichick, 1991). In 1993, evaluation questionnaires reflecting knowledge and behavior change in a pre-, post-, and post-post format were distributed nationally for voluntary field testing, first by the authors at the meeting of the Society for Nutrition Education and later by a national program leader to a general list of specialists in each state and territory.

The results of these pilot studies are reported elsewhere (Chapman, Clark, Boeckner, McClelland, Britten, & Keim, 1995; McClelland, Keim, Britten, et al., 1995). The Impact Indicators Project (IIP) included the development of impact indicators for nutrition education programs and accompanying instruments to assess knowledge and behavior change. Field testing of these evaluation instruments generated an additional question: What do (a) Extension field staff and (b) Extension nutrition specialists think about the importance and use of evaluation measures?

This paper reports the results of telephone surveys with Extension nutrition specialists and field staff who participated in IIP. Although two separate surveys and analyses were used, both examined the use of nationally developed evaluation instruments focused on nutrition impact indicators and the perceived importance of the program evaluation process in general.

Data Collection and Analysis

Field Staff Surveys

The basic interview questions evolved from a consensus of the authors, who identified key issues pertaining to the evaluation process as it relates to field staff. The questions were refined, pilot-tested with students for clarity, and formatted using telephone survey techniques (Dillman, 1978). Open-ended questions followed by probes were used to offset the tendency to provide socially desirable answers about the utility of program evaluation.

Since the actual number of field staff who participated in the pilot study was small (n=26) compared to the potential participation across the country (n>3000), analysis of responses consisted primarily of identifying response trends. The strength of trends was described in relative terms rather than absolute terms: Most = more than half; Many = less than half but more than one-third; Some = between one-fourth and one-third; Few = less than one-fourth.
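To make the categorization concrete, the following minimal Python sketch (illustrative only, not part of the original study) maps a response count to the descriptors defined above; how boundary cases such as exactly one-half or one-third should be labeled is an assumption, since the paper does not specify them.

```python
def trend_label(count, total):
    """Return the relative descriptor used in this paper for a response trend.

    Illustrative sketch only. Cut points follow the definitions above
    (Most, Many, Some, Few); the handling of exact boundaries (e.g., a
    proportion of exactly one-half or one-third) is an assumption.
    """
    p = count / total
    if p > 1 / 2:
        return "Most"   # more than half
    elif p > 1 / 3:
        return "Many"   # less than half but more than one-third
    elif p >= 1 / 4:
        return "Some"   # between one-fourth and one-third
    else:
        return "Few"    # less than one-fourth


# Example: 15 of 26 field staff responding a certain way would be reported as "Most".
print(trend_label(15, 26))
```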

Specialist Surveys

The telephone survey questions for the nutrition specialists were developed and reviewed by the authors. Initial contact was made by telephone or electronic mail to each state to identify the most appropriate specialist to contact, determine that specialist's willingness to participate, and establish a time and date for the interview call. Confidentiality of the telephone interviews was maintained.

Interview data and notes were sent to one researcher for compilation. Because all but three states were represented by specialists, analysis of their responses differed from the method used for the field staff responses: quantitative results are reported where appropriate, along with qualitative responses. Two research team members reviewed the qualitative data from the specialist interviews. Institutional Review Board approval was obtained for both surveys.

Results and Discussion

Interviews were completed with 26 of the 28 field staff members from the 12 states that had participated in the pilot study, and with 53 nutrition specialists from 47 states and two territories.

Field Staff Survey

The 26 field staff who participated in IIP were a small fraction of the more than 3,000 field staff working in the Cooperative Extension System. Most field staff volunteered to participate in IIP after learning of the program from their state specialists. Many responded that their participation was the result of "requests" or direct instruction from their state specialist. A few felt the tools complemented their previously planned educational activities.

All but one of the responding field staff perceived many benefits in using the nutrition impact tools. Most field staff felt that feedback on their programs and the opportunity to use the evaluation methodology contained in the pilot were important benefits. Some felt this information was valuable for future program planning. A few field staff felt a major benefit was access to a standard methodology for measuring program performance, one that would increase the internal validity of their evaluation results. A few also felt that pre-testing provided a benchmark that was previously unavailable because post-test-only evaluations were typically used.

Field staff respondents thought these tools helped verify the value of programming efforts by demonstrating their impact. This demonstration of impact was perceived as instrumental in securing new or continued program funding. A few felt the pre-testing procedure created program awareness and increased the overall quality of the educational activity. The pre-test was seen as promoting higher quality questions and discussion, since participants were prompted to think briefly about the topic material in advance of the program.

Possible barriers to using evaluation tools included perceptions that (a) clientele were generally resistant to completing evaluation measures, (b) demographic information was difficult to obtain and offensive to clientele, and (c) written or "pen and paper" evaluations were difficult because of literacy and time constraints. Most field staff wanted a short, general, effective questionnaire that was specific to their program yet flexible enough to adapt to programming changes and diverse audiences.

Field staff felt they were personally responsible for communicating results to their local funders, yet they apparently did not routinely use evaluation tools to document impact and thus provide a basis for that communication. Fewer than one-fourth of this small sample reported using a written evaluation measure almost all the time, and more than half reported that they had not used a written evaluation tool the last time they could have.

Specialist Survey

Forty-seven of the 53 specialists who were interviewed recalled seeing the IIP questionnaires. Seventeen specialists from 16 states (36%) participated in the field testing; 27 did not participate, and three were not sure about their state's participation. This participation rate differs from that reported by field staff. It is likely that specialists who considered themselves field study participants had sent IIP questionnaires to field staff who subsequently did not use the tools and failed to report back to the state specialist.

Willingness to field test IIP questionnaires was associated with years employed in Extension. In this survey, 52% of the specialists who had worked for Extension ten years or less field tested the instruments compared to only 18% of specialists who had worked in Extension for more than ten years.

Responses to questions about general program evaluation showed that nearly half of the specialists (48%) estimated spending 11% to 25% of their time on program evaluation. Program evaluation processes were described as including the determination of program objectives, development of evaluation instruments, collection and analysis of data, and writing of evaluation reports. Twenty-eight of the 53 respondents (53%) had evaluation assistance available to them from their universities; of these 28, 23 received assistance from an Extension evaluation specialist. Conversely, 25 of 53 respondents did not have evaluation assistance available to them. If impact evaluation continues to be important and needed, subject-matter specialists who also have evaluation expertise may become more critical in the future.

Specialists who participated in the IIP field testing (n=17) gave two main reasons for doing so: (a) personal or professional responsibility (9 respondents) and (b) need (7 respondents). Specialists felt the instruments offered a step toward reporting needed national impact data. Comments indicative of a sense of personal or professional responsibility were similar to: "... could see a great benefit if we can report national data." Others indicated their participation in IIP field testing was based on increasing requests by state and federal officials for documentation of impact.

Questions about general program evaluation, asked of all interviewed specialists, revealed the following major categories for the role of evaluation: (a) to show accountability to stakeholders and administrators (24 responses), (b) to support program management and development (23 responses), and (c) to show effect on consumers and clientele (12 responses).

When questioned about their uses of evaluation methods, specialists often mentioned end-of-meeting evaluations. The need for program management and development information has traditionally been met through end-of-meeting evaluations that assess suitability of the learning environment, acceptability of the teacher, delivery of appropriate topics, ideas for improving current programs, and suggestions for future programs. Although these are process types of evaluation, they fail to address whether program participants have made changes as a result of the program. When specialists did mention impact evaluations to show program effectiveness, these were program specific, e.g., the Expanded Food and Nutrition Education Program or multi-unit in-depth courses.

Specialists who did not participate in pilot testing the questionnaires (n=27) were asked their reasons for non-participation. Nearly half (12) indicated that time or staffing constraints prevented them from undertaking the project. Nearly one-fourth (7) indicated that handling evaluations through the mail was too cumbersome. Some specialists (6) also perceived a reluctance on the part of Extension field staff to facilitate the process.

All of the interviewed Extension specialists identified barriers to general program evaluation. Twenty-three of the 53 specialists (43%) indicated experiencing time and funding shortages. Gaining acceptance of the evaluation process and cooperation from county faculty (21) and a lack of administrative support (8) were additional issues. Nearly one-third (17) reported they lacked expertise for conducting effective impact evaluations.

Summary and Recommendations

Although many field staff could relate evaluation results to improving their teaching and programming, others felt evaluation was a waste of time and effort. The finding that staff perceived few rewards for conducting evaluations suggests that Extension's rhetoric about the need for and desirability of measuring program outcomes is not matched by incentives. Field staff also seemed concerned that system-wide evaluation measures need to be sensitive to differences in both clientele and programs.

Extension nutrition specialists considered program evaluation to be needed and important, especially for accountability purposes. A perceived lack of time and financial resources was a common concern.

As the Extension system moves increasingly toward establishing accountability of programs, effective program evaluations become critical. The following are points to consider for the successful establishment of effective program evaluation within the Extension system:

  • Strong administrative support and leadership will be critical at all levels (local, state, federal) for effective program evaluation to occur. Extension leaders need to establish with Extension workers that well-planned evaluations are expected, supported, and rewarded.

  • The availability of program evaluation personnel within the Extension system is desirable, as is in-depth in-service education on program evaluation techniques. Both would enhance the knowledge base for designing effective evaluation protocols and address the discomfort that Extension personnel may feel regarding their evaluation expertise.

  • Evaluation tools must be user friendly and audience sensitive. Tools written at a variety of reading levels and tailored to specific cultures seem warranted.

The implications of this study carry over to any discipline or programmatic unit attempting to show the impact of its local programming. Since capacity building at local and state levels is important if Extension is to show impact, other disciplines or program units may need to assess their personnel for some of these same concerns and barriers.

References

American Dietetic Association. (1995). Nutrition intervention and patient outcomes: A self-study manual. Columbus, OH: Ross Laboratories.

American Public Health Association. (1991). Healthy communities 2000 model standards: Guidelines for community attainment of the Year 2000 National Health Objectives (3rd ed.). Washington, DC: Author.

Brink, M.S. (1986). FY88-91 Extension's food, nutrition and health program emphases and outcome indicators. Washington, DC: Home Economics and Human Nutrition Extension Service-USDA.

Chapman, K., Clark, C., Boeckner, L., McClelland, J., Britten, P., & Keim, K. (1995). Multistate impact indicators project. Proceedings from the Society for Nutrition Education Annual Meeting, 20, 45.

Dillman, D.A. (1978). Mail and telephone surveys: The total design method. New York: John Wiley and Sons, Inc.

McClelland, J., Keim, K., Britten, P., Boeckner, L., Chapman, K., Clark, C., & Mustian, R. (1995). Measuring dietary fat knowledge and behavior using impact indicators. Proceedings from the Society for Nutrition Education Annual Meeting, 20, 46.

St. Pierre, R.G. (1982). Specifying outcomes in nutrition education evaluation. Journal of Nutrition Education, 14(2), 49-51.

Summers, J.C., Miller, R.W., Young, R.E., & Carter, C.E. (1981, July). Program evaluation in extension: A comprehensive study of methods, practices and procedures (Executive Summary). Morgantown, WV: West Virginia University Cooperative Extension Service, Office of Research and Development.

Voichick, J. (1991). Impact indicators project report. Madison, WI: Extension Service-USDA.