October 1995 // Volume 33 // Number 5 // Feature Articles // 5FEA4
Evaluating Extension Program Effectiveness: Food Safety Education in Texas
Abstract
This article describes how program outcomes can be assessed and used to determine Extension program effectiveness, using an evaluative study as an example. The purpose of the study was to determine the extent to which program participants changed their food handling or food preservation behaviors after attending Extension programs. Two hundred three telephone interviews with program participants were completed. Results indicated that programs were effective in increasing the adoption of safe food handling and safe food preserving behaviors. Evaluative data such as these are critical for Extension personnel as they prepare accountability reports, funding requests, and program plans.
Budget constraints have caused federal and state legislators to carefully examine funding requests in terms of the effects or outcomes of particular programs. The programs most likely to be funded are those that give the greatest return for the dollars spent. Thus, in order to receive funding in the future, Extension personnel will need to continue developing and improving strategies to inform legislators about the value of their programming efforts.
Traditionally, Extension accomplishments have been illustrated by providing key decision makers with data describing specific programs and the number, ethnicity, and gender of citizens served. Frequently, however, data are not readily available to evaluate program outcomes. What behaviors did participants adopt as a result of participating in these programs? What are the implications for program impact in terms of dollars saved or earned, health benefits, and/or social change? Can the program be justified in terms of a benefit/cost model? These are questions that need to be answered to keep Extension programs viable and relevant. To answer these questions, the program planning process needs to include evaluation strategies designed to measure outcomes and assess effectiveness.
Planning for Evaluation
The three basic steps in program planning and evaluation are (a) developing a set of clearly defined objectives, (b) determining the procedures/methods to accomplish the objectives, and (c) identifying evaluation techniques to measure the extent to which program objectives are met by the methods and materials used in program delivery (Isaac & Michael, 1981).
The objectives represent the "what" of the plan and should be stated in clear and measurable terms. Objectives should state what the program participants will be able to do after participating in the program.
The second step of the process involves the "how." Based on the targeted audience and particular objectives, the program planner has to determine which methods and materials will be used to accomplish the objectives.
The final step is to develop evaluation strategies for determining if the objectives have been achieved. Evaluation or outcome data provide information that can be used to explain the "why" of future funding requests and help justify current programming efforts.
This article focuses on the last step, evaluation. An evaluation study to assess the effectiveness of food safety programming in Texas is used as an example. The purpose of the study was to determine the extent to which program participants changed their food handling or food preservation behaviors after attending Extension programs presented during 1992 and 1993. This evaluation study included the following steps:
- identifying the population,
- developing the interview forms,
- collecting and analyzing the data, and
- communicating the findings.
Identifying the Population
The first step of the evaluation study was to identify the population (i.e., who had participated in these programs) and select the sample. Because each county's Plan of Work is accessible through a computer network, a key word search (food and safety and food preservation) was conducted. District Extension directors were also asked to identify additional agents who had done food safety programming but were not revealed by the key word search. The 65 agents identified were asked to submit their attendance lists for food safety programs. Food safety programs were defined as those utilizing a group meeting or result demonstration format to teach safe food handling or food preservation practices to adults. Three agents in 1992 and nine agents in 1993 presented the food handling programs. Two agents in 1992 and seven agents in 1993 presented the food preservation programs.
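The key word search described above amounts to filtering Plan of Work records for the target terms. A minimal sketch follows; the record structure, field names, and county data here are hypothetical illustrations, not the actual Plan of Work system:

```python
# Hypothetical Plan of Work records; the real system and its fields differ.
plans = [
    {"county": "Brazos", "summary": "Food safety workshops on safe food handling"},
    {"county": "Travis", "summary": "4-H livestock judging program"},
    {"county": "Harris", "summary": "Food preservation result demonstrations"},
]

KEYWORDS = ("food safety", "food preservation")

def matches(record):
    """Return True if the plan summary mentions any target key word."""
    text = record["summary"].lower()
    return any(kw in text for kw in KEYWORDS)

hits = [r["county"] for r in plans if matches(r)]
print(hits)  # ['Brazos', 'Harris']
```

A search like this over-selects and under-selects, which is why the study also asked district directors to name agents the key word search missed.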
Developing the Survey Instruments
Extension programming materials were reviewed to identify key food handling and food preservation practices (Gentry-Van Laanen, 1991). These program materials included lesson objectives and concepts, suggestions for teaching, activity and fact sheets, transparency masters, and supplemental leader information.
The food handling interview form contained 16 questions relating to how respondents handled food before participating in the program. The same questions were repeated to identify food handling behaviors after participating in the program. Respondents had to recall whether they performed the behavior using a yes/no format. If they answered yes, they were asked how often they performed the behavior using a five-category response format of more than 90% of the time; between 75% and 90% of the time; between 50% and 75% of the time; between 25% and 50% of the time; or less than 25% of the time. The food preservation interview included eight questions relating to which kinds of foods they canned (i.e., high acid, low acid, pickles, jellies), and how they processed these foods before and after participating in programs. In addition, both interview forms included a section on demographics.
Asking before and after questions at the same time is a potential limitation of the study. However, since no baseline data were collected prior to the programs, collecting both pre- and post-behaviors after the programs was a necessity. While the simultaneous collection of pre- and post-data might lead some respondents to over-represent a positive experience by downplaying their behavior before the program, this tendency is also likely to some degree when assessment is done before programming takes place and the period between pre- and post-assessment is short. In the latter case, a favorable experience might lead respondents to over-represent their scores on the post-test. Reduction of the effects of this limitation requires direct observation of participants' behaviors, which was beyond the scope of this study. To avoid this limitation, program developers need to include pre-program evaluation instruments in the programming materials and stress the importance of collecting this pre-program data.
After the interview forms were reviewed by Extension specialists and agents who did food safety programming, the two forms were pretested. Based on data from the pretest, some modifications were made in wording and format.
Collecting the Data
The survey was conducted by telephone. Trained interviewers called participants over a 10-week period. Five attempts were made to reach each respondent. Program participants readily answered the questions and completed interviews that ranged from 15 to 20 minutes. From the available pool of 463 potential food handling respondents, phone numbers were randomly selected and called until 100 interviews were successfully completed. For the food handling participants, 132 calls were made to complete the 100 interviews, for a 76% response rate. From the available pool of 268 potential food preservation respondents, phone numbers were randomly selected and called until 103 interviews were successfully completed. For these participants, 121 calls were made to complete the 103 interviews, for an 85% response rate.
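The response rates reported above are completed interviews divided by numbers called, rounded to the nearest percent. A quick arithmetic check:

```python
# Response rate = completed interviews / phone numbers called, as reported above.
def response_rate(completed, called):
    """Return the response rate as a whole-number percentage."""
    return round(100 * completed / called)

print(response_rate(100, 132))  # food handling: 76 (%)
print(response_rate(103, 121))  # food preservation: 85 (%)
```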
Results and Discussion
The mean scores of the before and after food handling behaviors were compared and t-tests were used to determine if differences were significant. For example, food handling program participants reported practicing safe food handling behaviors a higher percentage of the time after attending the programs (Table 1). Specific food handling practices showing statistically significant changes included (a) thawing frozen foods in the refrigerator, (b) using appliance thermometers, (c) keeping food preparation areas clean, (d) washing hands before handling food, (e) washing hands after handling raw meat and before handling other food, and (f) refrigerating perishable foods promptly. In addition, almost one-half of the participants identified publications or programs provided by the Extension Service as their main source of food safety information.
Table 1
Comparison of Percentage of Participants Who Practiced Safe Food Handling Behaviors Before and After Participating in Extension Programs (n = 100)

| Behavior | % Before | % After | t Score* | % Difference |
|---|---|---|---|---|
| Thawing frozen food in refrigerator | 41 | 76 | 6.579 | +35 |
| Using appliance thermometer | 40 | 54 | -3.717 | +14 |
| Keeping food prep areas clean more than 90% of the time | 79 | 91 | -3.146 | +12 |
| Washing hands before handling raw meat more than 90% of the time | 76 | 93 | -3.764 | +17 |
| Washing hands after handling raw meat more than 90% of the time | 82 | 97 | -3.639 | +15 |
| Refrigerating perishable foods promptly | 64 | 85 | 2.500 | +21 |

Note. * All t scores statistically significant at p < .05.
Comparisons of the mean scores of before and after food preservation practices were not as dramatic as the before and after comparisons of food handling behaviors, since the majority of canners were following recommended practices for preserving food safely before they attended the recent programs. This is not surprising considering that 85% of the participants were experienced canners, having canned for nine or more years, and were already following recommendations from Extension publications and programs. With one exception, these canners were using the recommended processing techniques for high-acid and low-acid foods. However, a lower percentage of canners were processing jellies and pickles (a practice recommended by the U.S. Department of Agriculture, 1988) before they attended the program. The percentage who processed their jellies using a boiling-water canner increased from 32% before the program to 49% after the program, while the percentage using a boiling-water canner to process pickles increased from 64% before the program to 77% after the program. The number using paraffin to seal jars (a technique no longer recommended) decreased from 35% to 11%.
Communicating Evaluation Findings
Having completed the program evaluation, the next task is to identify who needs to know about the findings. In other words, who are the stakeholders (Morris, Fitz-Gibbon, & Freeman, 1987)? Stakeholders for the findings of this study included the state director and administrative staff, as well as other Extension personnel (i.e., nutrition specialists, members of the state food safety initiative team, district directors, and participating county agents).
These individuals are expected to use this information in a variety of ways for secondary audiences such as legislators. For example, the state administrative staff members will use data from this report to prepare accountability reports mandated by legislators. Likewise, county agents will use this information for local planning and reporting to local officials, while specialists will use the data for justifying and prioritizing program support, including materials development for agents.
Evaluation findings can be disseminated in a variety of formats. In the current case, a technical report was prepared that included information about the study's procedures and findings. Print and news releases containing information about the targeted programs and results were also prepared.
Information about the study was also included in a feature article on food safety in the Fall 1994 issue of Extension Today, a publication of the Texas Agricultural Extension Service whose circulation includes key state and national legislators and clientele (Chenault, 1994). Finally, the authors submitted articles based on the findings to professional journals to reach colleagues in other states.
Summary and Implications
A carefully designed program evaluation is a critical part of the program planning process. The program evaluation needs to include procedures for identifying program participants, developing survey instruments based on program objectives, collecting data, and communicating the findings. Outcome evaluative data can provide information to help determine how effective programming efforts have been and to help tell the Extension story to a variety of different audiences. Procedural information from the current study will help Extension personnel design future studies to meet increasing accountability requirements by legislators, funders, and the public.
With continued concern about the incidence, complications, and costs of foodborne illness, especially to certain segments of the population, educational efforts that show adoption of safe food handling behaviors should be continued. It was beyond the scope of this study to attach a concrete dollar amount to the impact of food safety programming in Texas. However, the implication is that if more people handle food safely, there is likely to be an economic benefit in terms of increased productivity of workers due to fewer sick days, reduced medical and hospital costs, and fewer deaths.
Future evaluation efforts will target food handling programs for commercial and institutional food service employees in an effort to determine Texas Extension's role in training this important audience, particularly in rural areas of the state.
References
Chenault, E. (1994, Fall). Food safety programs focused on prevention. Extension Today!, 13(3), p. 3.
Gentry-Van Laanen, P. (1991). The Texas food safe guide for Extension agents. College Station: Texas Agricultural Extension Service.
Isaac, S., & Michael, W. B. (1981). Handbook in research and evaluation (2nd ed.). San Diego: EdITS.
Morris, L. L., Fitz-Gibbon, C. T., & Freeman, M. E. (1987). How to communicate evaluation findings. Newbury Park, CA: Sage.
U.S. Department of Agriculture. (1988). Complete guide to home canning (Extension Service Agriculture Information Bulletin No. 539). Washington, DC: Author.
Author Notes
J. Nies was an Extension specialist with the Texas Agricultural Extension Service when the data for this study were collected.