Spring 1985 // Volume 23 // Number 1 // Feature Articles // 1FEA3


Mixing Apples and Oranges

Abstract
Results can be aggregated across individualized programs.


Christine J. Thompson
Evaluation Specialist, Family Living Education
Michigan State University, East Lansing


Have you ever been faced with the task of adding up results data from a number of similar, but slightly different, programs in your state or county? What about the job of coming up with a concise report emphasizing the combined impacts of such programs? The temptation is to throw up your hands and insist that the task is impossible. It's like adding up apples and oranges, right? Not quite . . . it can be done. The key to success is developing and using result indicators.

Result Indicators

A result indicator is a tangible, accepted sign that represents a major result of Extension programming.

A well thought-out result indicator defines an observable indication that the evaluator takes as a sign that a change in knowledge, skills, attitudes, and/or practices has occurred.1

Result indicators can be classified into "micro" and "macro" levels. Micro indicators are program-specific and are generally outcome statements of the goals and objectives of a given program. For example, "developing a spending plan" would be one indicator of behavior change. Macro indicators are broader than micro indicators and reflect major accomplishments of Extension programs. For example, "increased financial security" is a macro indicator and is measured by aggregating many micro indicators, including developing a spending plan.2 Both types of indicators are important in documenting change in participants of Extension programs and in aggregating data across multiple counties conducting similar programs.
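
The relationship between the two levels can be pictured as a simple mapping from each macro indicator to the micro indicators that feed it. The short Python sketch below is only an illustration of that bookkeeping: the spending-plan item comes from the example above, while the other micro indicators and the function name are hypothetical, not drawn from any actual indicator list.

    # A minimal sketch of the micro/macro relationship described above.
    # Only "developed a spending plan" comes from the article; the other
    # entries are illustrative placeholders.
    macro_to_micro = {
        "increased financial security": [
            "developed a spending plan",
            "compared prices before a major purchase",   # hypothetical
            "started or added to a savings plan",        # hypothetical
        ],
    }

    def micro_indicators_for(macro_indicator):
        """Return the program-specific (micro) indicators that would be
        aggregated to document the given macro indicator."""
        return macro_to_micro.get(macro_indicator, [])

    if __name__ == "__main__":
        print(micro_indicators_for("increased financial security"))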

Micro indicators can most effectively be determined by programmers working together with evaluators early in the development stages of a program. The task of identifying measurable expected outcomes can help build a program that reaches conceptualized goals as well as provide criteria to measure impacts. Identifying micro indicators lays the groundwork for deciding on macro indicators, the broader signs that are useful for accountability reporting.

A set of result indicators for a given program area can help: reduce effort and duplication in the data collection process, provide a base of comparison for interpreting results data, aid aggregation of data on a statewide or national level, and summarize and communicate results information to non-Extension audiences.3

This article discusses the identification and use of result indicators in the area of stress management within Michigan Extension's Family Living Education program.

Programming Model

Programming in the area of stress management has been increasing rapidly in Michigan in the past few years. A variety of approaches with respect to content and delivery were used to reach over 7,000 people in 54 counties in 1982-83. As programming in stress management represented a growing segment of the total programs in human development, it became clear that developing evaluation criteria and methods to produce aggregable results was imperative.

In this effort, the evaluation specialist in Family Living Education worked closely with specialists in human development who were developing teaching materials for agent use. A major goal was to work together to identify a set of result indicators for the major programming thrusts within the area of stress management. A second goal was to produce evaluation instruments that were program-specific, yet could be adapted for multiple-county use. Also, the results had to be easily aggregable for state reporting.

With these goals in the forefront, program-specific result indicators were developed and classified according to the level of outcome represented: knowledge, attitudes, skills, and/or aspirations (KASA) change; practice change; and/or end-results change.4 Second, end-of-session and delayed follow-up instruments (both mail and telephone) were developed, each including one question to be completed by agents by "plugging in" appropriate indicators from the lists provided.

An example should help clarify this "plug-in" strategy. The question on the end-of-session forms to be completed by the agent/programmer read:

As a direct result of this program, has your understanding increased in any of the following areas?
Yes, to a great extent ( )
Yes, somewhat ( )
No, not really ( )
(Items listed will be chosen by the agent using the list of KASA result indicators.)

The agent completed the question by choosing 5 to 8 indicators from a "menu" of over 30 that represented expected outcomes of the program. Indicators included, but weren't limited to: ability to identify personal stress symptoms, recognizing that I have control over how I react to stressors or pressures, and recognizing that positive as well as negative changes in my life can be stressful.
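
The "plug-in" step can be mimicked in a few lines of code. The Python sketch below is only an illustration: the menu holds the indicators quoted in this section (plus one hypothetical entry to round it out), and the function name and the 5-to-8-item check simply mirror the procedure described here; none of it is part of the actual Michigan instruments.

    # Minimal sketch of the "plug-in" strategy: an agent selects 5 to 8
    # indicators from a master menu, and those become the items under the
    # end-of-session question.
    KASA_MENU = [
        "ability to identify personal stress symptoms",
        "recognizing that I have control over how I react to stressors or pressures",
        "recognizing that positive as well as negative changes in my life can be stressful",
        "recognizing that underload as well as overload can be stressful",
        "understanding how stress affects my physical health",  # hypothetical entry
    ]

    RESPONSES = ["Yes, to a great extent", "Yes, somewhat", "No, not really"]

    def build_question(selected):
        """Assemble the end-of-session item from the agent's chosen indicators."""
        if not 5 <= len(selected) <= 8:
            raise ValueError("agents chose 5 to 8 indicators from the menu")
        missing = [s for s in selected if s not in KASA_MENU]
        if missing:
            raise ValueError("not on the KASA menu: %s" % missing)
        lines = ["As a direct result of this program, has your understanding "
                 "increased in any of the following areas?"]
        for item in selected:
            lines.append("  %s:  %s" % (item, "   ".join("%s ( )" % r for r in RESPONSES)))
        return "\n".join(lines)

    if __name__ == "__main__":
        print(build_question(KASA_MENU[:5]))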

A similar question was included in the follow-up form: "As a result of this program, have you..." To complete the question, the agent referred to a list of indicators representing practice and end-results changes. Included on the list were: improved my responses to a stressful situation, developed a personal plan of action in response to a stressful situation, used safety valves or ways to reduce stress that work for me, and improved my relationship with my spouse, family, or friends.

The instruments developed to document impacts of stress programming, as described above, were piloted in three Michigan counties during 1983. Each county implemented the instruments somewhat differently. One Extension home economist, who was working with an unemployed audience, decided to use only an end-of-session questionnaire. Another agent mailed the end-of-session form about three months after the workshop and followed up six months after the program with a telephone survey of a random sample of participants. The third county used the instruments as modeled.

Results

Although there was some deviation from the model, results from the three counties could be aggregated. For example, in all three counties, participants were asked whether their understanding had increased in the following areas:

Indicator                                                                      % indicating increased
                                                                               understanding
                                                                               (3-county average)

Ability to identify my personal stress symptoms                                86%
Recognizing that I have control over how I react to stressors or pressures     79%
Recognizing that positive as well as negative changes can be stressful         85%
Recognizing that underload as well as overload can be stressful                69%

The telephone follow-up survey, conducted in two of the three counties, documented practice changes. Of the indicators agents chose to "plug-in" to the questionnaire, four were common to both counties and can be aggregated as follows:

Indicator                                                                      % of sample
                                                                               making the change
                                                                               (2-county average)

Felt less worried or pressured about a previously stressful situation          65%
Developed strategies to cope with stress                                       81%
Better able to recognize symptoms of stress in children                        87%
Better able to manage work-related stress                                      62%

Although some indicators at both the KASA change and practice change levels were unique to one county, enough were common to create meaningful macro indicators from the common micro indicators. For example, taking the four common KASA indicators together, we could report that 80% of participants in stress programming were "better able to understand and identify stress in self and others." The four practice change indicators, taken together, indicated that three-fourths were "better able to manage stress in the home and work environment."
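
The arithmetic behind those two summary figures is simply an unweighted average of the common micro-indicator percentages, which appears to be how the 80% and three-fourths statements were derived. A minimal Python sketch, assuming that simple averaging and using shortened indicator labels:

    # Rolling common micro indicators up into macro figures, assuming a
    # simple unweighted average of the reported percentages.
    kasa = {  # % reporting increased understanding (3-county averages)
        "identify my personal stress symptoms": 86,
        "I control how I react to stressors or pressures": 79,
        "positive as well as negative changes can be stressful": 85,
        "underload as well as overload can be stressful": 69,
    }

    practice = {  # % of follow-up sample reporting the change (2-county averages)
        "felt less worried about a previously stressful situation": 65,
        "developed strategies to cope with stress": 81,
        "recognize symptoms of stress in children": 87,
        "manage work-related stress": 62,
    }

    def macro_percent(micro_results):
        """Average the common micro-indicator percentages into one macro figure."""
        return sum(micro_results.values()) / len(micro_results)

    print(round(macro_percent(kasa)))      # ~80: "better able to understand and identify stress"
    print(round(macro_percent(practice)))  # ~74: roughly three-fourths "better able to manage stress"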

In addition to facilitating aggregation across counties, common indicators provided a basis for comparison of program impacts among different audiences. For example, a greater percentage of the unemployed audience reported increased realization that "underload as well as overload can be stressful" than did the mixed audiences in the other two counties. Also, common indicators, when used in multiple sites or counties, can help establish benchmark data or standards over time by which subsequent programs can be measured.

Summary

Extension faces a dilemma in the 1980s. Those of us in evaluation roles in Extension are aware of the need to develop common result indicators and to report on them to address accountability issues. At the same time, Extension prides itself on recognizing individual differences at the grass-roots level. Our programs, as well as our accountability, will be strengthened if we come to terms with the fact that, within Extension, our commonalities are greater than our differences.

The evaluation strategy for the stress programming effort in Michigan recognizes that although audiences and counties are different, common needs and program objectives to meet these needs can be agreed on. Common indicators have made it possible to aggregate data across counties, compare results of different audience types, establish benchmark data, and, at the same time, meet the evaluation needs of individual agents. These indicators have been used to substantiate program results and communicate accomplishments to both Extension and non-Extension audiences.

Can we add up apples and oranges? You bet we can!

Footnotes

  1. E. Elliott, P. Boyle, and B. L. Ralston, Cooperative Evaluation Project in Home Economics (Madison: University of Wisconsin-Extension, 1977).
  2. Indicators and Levels of Change in Consumer Competence (East Lansing: Michigan State University, Cooperative Extension Service, Family Living Education, 1979).
  3. Sara Steele, Program Result Indicators (Paper presented at the Workshop for Extension Home Economics Specialists, Home Economics Evaluation Project, Madison, Wisconsin, 1976).
  4. Claude F. Bennett, Analyzing Impacts of Extension Programs (Washington, D.C.: USDA, Cooperative Extension Service, 1976).