June 1998 // Volume 36 // Number 3 // Research in Brief // 3RIB5


A Model for Integrating Program Development and Evaluation

Abstract
We present the communication model of Gillespie and Yarbrough and explain how it served as the framework for combining program development and formative evaluation of an osteoporosis prevention program for working mothers. The model includes inputs, an educational intervention, and outcomes. Combining receiver, situational, and educator inputs produced a more viable pilot program that was tested with the target audience at two types of sites convenient for working mothers. We collected both quantitative and qualitative outcome data, allowing a rigorous examination of whether or not program components were successful, and why. Extension educators should consider using the model as an effective way to combine program development and evaluation.


J. Lynne Brown
Associate Professor, Food Science and
Human Nutrition Specialist
Internet address: f9a@psu.edu

Nancy Ellen Kiernan
Program Evaluation Specialist
Internet address: nekiernan@psu.edu

The Pennsylvania State University
College of Agricultural Sciences
State College, Pennsylvania


Integrating evaluation with program development is critical to producing educational programs that have demonstrable impact. Scriven (1967) was the first to define two types of educational program evaluation: formative and summative. More recently, Patton (1994) outlined their sequential nature: first, formative data are collected and used to prepare for the summative evaluation; then, a summative evaluation is conducted to provide data for external accountability.

Many Extension educators recognize this sequence but may place more importance on the second phase due to the need for impact data to address accountability (Voichick, 1991). However, Patton and others emphasize that evaluation should be an integral part of the program development process and, therefore, place equal or greater weight on the first phase, formative evaluation. According to Patton (1994), a formative evaluation should provide feedback on the original program and improve program implementation, while a summative evaluation should determine if the desired outcomes are achieved and can be attributed to the revised program.

Chambers (1994) argues it is not the timing, but the use of evaluation data that distinguishes formative from summative. He emphasizes that formative evaluation provides data with which to modify the initial intervention and its delivery so that the final intervention is more effective as revealed by the summative evaluation. Scheirer (1994) recommends using formative evaluation in a pilot situation to collect information on the feasibility of activities and their acceptance by recipients, suggesting qualitative methods such as interviews, focus groups, and observations to gather these data. In sum, these researchers suggest that formative evaluation should examine the effect of the program, the process of delivery, and the reactions of participants in the program.

When we planned the development and formative evaluation of an Extension nutrition education program, we had difficulty finding examples in the literature applicable to our situation. Reports of formative evaluations designed for large community-based nutrition interventions (McGraw et al., 1989; Potter et al., 1990; Jacobs et al., 1986; Finnegan et al., 1992) are available, but because of their complexity, the evaluation methods were not appropriate for our more focused, short-term educational program. Other formative evaluations have been limited to the development of materials (Tessmer, 1993).

Some researchers suggest using focus groups as the formative evaluation method for planning a short-term nutrition intervention (Crockett, Heller, Merkel, & Peterson, 1990; Iszler et al., 1995). Although a formative evaluation of a short-term program similar to the one we wished to design had been reported (Crockett, Heller, Skauge, & Merkel, 1992), the report did not disclose any data or data collection method that would have provided insights on how to improve the materials for the summative evaluation. These reports also did not provide a comprehensive model that combined program development and evaluation steps.

To guide our two-pronged objective, we turned to a communication model proposed by Gillespie and Yarbrough (1984), which had been used to plan, implement, and evaluate several short-term nutrition interventions (Gillespie, Yarbrough, & Roderuck, 1983; Mayfield & Gillespie, 1984). In this paper we outline this model, show how we applied it, and, based on our experience, explain what each component can contribute to the development of a program and its formative evaluation.

The Communication Model

Nutrition education is a form of communication between educator and target audience. To address the planning, implementation, and evaluation of a nutrition education program, the model contains three sequential components. The first is the inputs, that is, information from the target audience or the receiver of the communication, from the communicator or educator, and from persons in the location or situation where the program would be offered. The second component is the communication or education process itself, that is, program delivery with measures of the extent of receiver attention, comprehension, and interaction. The third component is the outcomes and receivers' acceptance or rejection of the message.

In program development, first the inputs are collected and considered. Receiver inputs include measures of participants' initial skills, attitudes, beliefs, and habits. Educator inputs include consideration and choice of communication channels (such as interpersonal or mass media), the source or sender of the message, and the content and format of the message. The situational factors to consider include time and place of delivery, repetitiveness of message, and whether the message is one way or reciprocal. All inputs can influence the responses to the program. Next, an educational program is delivered with some level of interaction, and participants' attention to and comprehension of different aspects of the program are measured. The final step is outcomes measurement. Acceptance or rejection of the program is determined at the cognitive (for example, knowledge), affective (for example, attitude), and behavioral levels, and this information guides program modification, closing the feedback loop. The model integrates program development and formative evaluation.
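For readers who want a concrete picture of this sequence, the components and the feedback loop can be sketched informally in code. The sketch below is our own illustration only: the field names, measures, and revision rule are assumptions used to make the structure concrete, not part of Gillespie and Yarbrough's model.

```python
# Illustrative-only sketch of the model's three components and its feedback loop.
from dataclasses import dataclass

@dataclass
class Inputs:
    receiver: dict    # baseline skills, attitudes, beliefs, habits
    educator: dict    # channel, source, message content and format
    situation: dict   # time, place, repetitiveness, message direction

@dataclass
class ProcessMeasures:
    attention: float      # extent the material was read and found useful
    comprehension: float  # grasp of the key points (0 to 1)

@dataclass
class Outcomes:
    cognitive: float      # e.g., change in knowledge score
    affective: float      # e.g., change in attitude score
    behavioral: float     # e.g., change in dietary behavior

def revise_program(program: str, process: ProcessMeasures, outcomes: Outcomes) -> str:
    """Close the feedback loop: use process and outcome data to modify the program.
    The threshold below is an arbitrary placeholder, not a value from the study."""
    if process.comprehension < 0.8 or outcomes.behavioral <= 0:
        return program + " (revised: clarify key concept, strengthen behavioral messages)"
    return program
```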

Application of the Model

We applied this model to develop a learn-at-home program for working women on the prevention of osteoporosis. Table 1 shows how we collected data for each component of the model.

Table 1
Osteoporosis Lessons Development and Evaluation Model

* Inputs from Receivers (target audience: working mothers, ages 21-45):
  * Telephone survey to assess working women's interest in nutrition information.
  * In-person interviews with members of the target audience.
  * Pre-test knowledge, attitudes, and behavior (KAB) instrument to provide the target audience's demographics, initial knowledge, attitudes, and behaviors.
* Inputs about the Situation (sites for delivering the program):
  * Telephone survey of work site personnel managers.
  * Extension agent assessment of experience with child care sites.
* Inputs from Educators (nutrition specialist, evaluation specialist, and Extension agents):
  * Choice of lesson structure, content, repetitiveness, message direction, delivery method, source, evaluation plan, instruments, and delivery sites.
* Educational Program Delivery:
  * Pre-test, post-test design with an 8-week intervention in which lessons were distributed one at a time every two weeks. Formal interaction with participants was not part of the pilot design.
  * Response sheet inserted in each lesson to assess attention (extent information was read and found useful) and comprehension (of two key points).
* Outcomes:
  * KAB scale questionnaire and a food frequency questionnaire (FFQ) provided comparison of pre/post quantitative data.
  * Focus group discussion with participants (working mothers).
  * Group discussion with participating Extension agents.

What the Model Contributed to Program Development and Evaluation

Receiver inputs: The telephone survey indicated that young and middle-aged working women preferred to receive printed information about diet and disease. Interviews with 22 married working mothers provided information on food preparation practices, familiarity with food sources of calcium, and knowledge about risk factors for osteoporosis. These women did most of the cooking and shopping and purchased milk for their children, although some did not drink it themselves; they had limited familiarity with alternative food sources of calcium and recognized the term osteoporosis but did not know its risk factors.

Situational inputs: We considered two sites for lesson delivery, work sites and child care sites. Agents who had worked with child care sites provided valuable advice on when and how to provide a program at these sites. Our telephone interviews with 71 work site contacts provided critical information about acceptable times for lesson delivery and management experience with and expectations of employee health programs delivered at the work site.

Educator inputs: We combined the above inputs with educator information about the disease process, risk factors, and national data on women's mean calcium intake and prevalence of lactose intolerance, and used this information to produce a set of four learn-at-home lessons featuring four calcium-rich foods that were alternatives to fluid milk. We developed an evaluation plan that included quantitative and qualitative outcome data. Extension agents helped to recruit sites, appropriate contact persons, and participants, and to modify the advertising process. We developed the KAB instrument, established its internal consistency, and pretested the FFQ.
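The article does not report which statistic was used to establish internal consistency; as a hedged illustration only, the conventional measure for a multi-item scale, Cronbach's alpha, could be computed as in the sketch below. The responses shown are invented.

```python
# Hypothetical internal-consistency check for a multi-item scale (Cronbach's alpha).
# Shown only as the conventional measure; the statistic used in the study is not reported here.
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """item_scores: rows are respondents, columns are scale items."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]                         # number of items
    item_vars = item_scores.var(axis=0, ddof=1)      # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented responses to a 4-item subscale (ratings 0-5) from five respondents.
responses = np.array([
    [4, 3, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 4, 5],
    [3, 3, 2, 3],
    [1, 2, 2, 1],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```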

Educational Program Delivery: Analysis of the response sheets informed us that participants found the reading level of the lessons acceptable and the information useful. But we also learned that they failed to understand a key concept repeated in each lesson. Apparently our explanations of the concept in the lessons were insufficient for full comprehension.

Outcomes: Analysis of the pre-test instruments provided demographic information, the knowledge, attitude, and behavior scores, and nutrient intake (our dependent variables) of initial registrants. The demographic data indicated we were not reaching as many of our target audience as desired. Comparison of pre- and post-test instrument data indicated the impact of exposure to the lessons on our dependent variables. There were positive changes in knowledge but no change in attitudes or behaviors. Only half of registrants completed the post-intervention instruments.
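The brief does not specify the statistical test used for the pre/post comparison; as an illustration under that caveat, a paired comparison on one dependent variable (here, a hypothetical knowledge score) might look like the following sketch. All scores are invented.

```python
# Illustrative pre/post comparison for one dependent variable using a paired t-test.
# The test and the data are assumptions for illustration, not taken from the study.
import numpy as np
from scipy import stats

# Hypothetical scores for participants who completed both instruments.
pre_knowledge = np.array([12, 15, 10, 14, 11, 13, 16, 9])
post_knowledge = np.array([16, 18, 13, 15, 14, 17, 18, 12])

t_stat, p_value = stats.ttest_rel(post_knowledge, pre_knowledge)
mean_change = (post_knowledge - pre_knowledge).mean()
print(f"Mean change = {mean_change:.1f}, t = {t_stat:.2f}, p = {p_value:.3f}")
```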

Focus groups, conducted with both completers and non-completers of the lesson series, provided insights into advertising, the delivery process, and lesson content and format, as well as suggestions for improvement. These sessions identified unanticipated changes made by Extension agents in lesson delivery and also provided reasons for the poor completion rate and critical information about the evaluation instruments. Specifically, participants felt it made little sense to complete the KAB and FFQ immediately after the intervention because they had not had time to adjust to the lesson information and change their food habits. Discussion with the Extension agents provided feedback on desirable characteristics of site contacts and problems in the advertising and delivery process, as well as comments on lesson content. Comparison of the data from the participant focus groups and agent discussions identified common problems, which became the focus of program revisions.

Conclusions and Implications for Extension Educators

We used the data collected through the model to revise the program and plan the summative evaluation of the "new" improved program. The communication model provided a valuable framework for combining program development and formative evaluation. Its emphasis on inputs improved initial program design by focusing our attention on both receiver and situational characteristics. We used surveys and interviews to gather this information, but Extension educators could use other methods depending on the target audience.

The model specifies conducting an educational intervention or program. A pilot program of any scope desired by the educator can fit the model, but offering the pilot within the desired situational context and administering pilot instruments are critical to good formative evaluation. We disagree with Iszler et al. (1995), who suggested that merely exposing members of the target audience to program ideas in a focus group setting is sufficient formative evaluation.

By offering the pilot program within the situational context, we discovered (a) problems attracting the desired target audience, (b) unintended changes in program delivery, and (c) serious problems with the evaluation instruments, delivery methods, and materials that could only have been detected by experiencing a full pilot program. Scheirer (1994) labeled as type III errors both delivering a program to other than the intended audience and altering program delivery, without the program developers' knowledge, to suit site conditions. She suggested that evaluation of the process of program delivery is needed to detect these errors. We agree.

Outcome data are enriched by using qualitative methods to secure participant feedback. These methods provided critical data that explained why certain program components did not work. This type of data would have helped Crockett et al. (1992) understand the reasons for the lack of program impact. Our qualitative outcome data were richer because participants experienced the full pilot program.

Combining quantitative and qualitative measures within the model framework led to a more rigorous examination of the acceptance and impact of a pilot educational program. We recommend that Extension educators consider this model when planning and developing a short-term educational program.

References

Chambers, F. (1994). Removing confusion about formative and summative evaluation: Purpose versus time. Evaluation and Program Planning 17(1), 9 - 12.

Crockett, S. J., Heller, K. E., Merkel, J. M., & Peterson, J. M. (1990). Assessing beliefs of older rural Americans about nutrition education: Use of the focus group approach. Journal of the American Dietetic Association 90(4), 563 - 567.

Crockett, S. J., Heller, K. E., Skauge, L. H., & Merkel, J. M. (1992). Mailed-home nutrition education for rural seniors: A pilot study. Journal of Nutrition Education 24(6), 312 - 315.

Finnegan Jr., J. R., Rooney, B., Viswanath, K., Elmer, P., Graves, K., Baxter, J., Hertog, J., Mullis, R. M., & Potter, J. (1992). Process evaluation of a home-based program to reduce diet-related cancer risk: The WIN at Home series. Health Education Quarterly 19(2), 233 - 248.

Gillespie, A. H., & Yarbrough, P. (1984). A conceptual model for communicating nutrition. Journal of Nutrition Education 16(4), 168 - 172.

Gillespie, A. H., Yarbrough, P., & Roderuck, C. E. (1983). Nutrition communication program: A direct mail approach. Journal of the American Dietetic Association 82(3), 254 - 259.

Iszler, J., Crockett, S., Lytle, L., Elmer, P., Finnegan, J., Leupker, R., & Laing, B. (1995). Formative evaluation for planning a nutrition intervention: Results from focus groups. Journal of Nutrition Education 27(3), 127 - 132.

Jacobs Jr., D. R., Luepker, R. V., Mittlemark, M. B., Folsom, A. R., Pirie, P. L., Mascioli, S. R., Hannan, P. J., Pechacek, T. F., Bracht, N. F., Carlaw, R. W., Kline, F. G., & Blackburn, H. (1986). Community-wide prevention strategies: Evaluation design of the Minnesota Heart Health Program. Journal of Chronic Diseases 39(10), 775 - 788.

Mayfield, B. J., & Gillespie, A. H. (1984). A direct-mail nutrition in-service program for county agents. Journal of Nutrition Education 16(3), 119 - 122.

McGraw, S. A., McKinlay, S. M., McClements, L., Lasater, T. M., Assaf, A., & Carleton, R. A. (1989). Methods in program evaluation: The process evaluation system of the Pawtucket Heart Health Program. Evaluation Review 13(5), 459 - 482.

Patton, M. Q. (1994). Developmental evaluation. Evaluation Practice 15(3), 311 - 319.

Potter, J. D., Graves, K. L., Finnegan, J. R., Mullis, R. M., Baxter, J. S., Crockett, S., Elmer, P. J., Gloeb, B. D., Hall, N. J., Hertog, J., Pirie, P., Richardson, S. L., Rooney, B., Slavin, J., Snyder, M. P., Splett, P., & Viswanath, K. (1990). The cancer and diet intervention project: A community-based intervention to reduce nutrition-related risk of cancer. Health Education Research: Theory and Practice 5(4), 489 - 503.

Scheirer, M. A. (1994). Designing and using process evaluation. In J. S. Wholey, H. Hatry, and K. Newcomer (Eds.), Handbook of Practical Program Evaluation (pp. 40-68). San Francisco: Jossey-Bass.

Scriven, M. (1967). The methodology of evaluation. In R. W. Tyler, R. M. Gagne, and M. Scriven (Eds.), Perspectives in Curriculum Evaluation (pp. 39-83). Chicago: Rand McNally.

Tessmer, M. (1993). Planning and Conducting Formative Evaluations: Improving the Quality of Education and Training. London: Kogan Page.

Voichick, J. (1991). Impact indicators project report. Madison, WI: Extension Service - USDA.