February 2003 // Volume 41 // Number 1 // Feature Articles // 1FEA6


Program Development in a Political World--It's All About Impact!

Abstract
Impact is the difference we make in people's lives as a result of the programs we conduct. To be effective, programs must ultimately change people's attitudes or behavior, or benefit society in other ways. Measuring impact is part of evaluation, and it must be considered while a program is being developed, not after the fact. Most Extension staff are already experts in conducting outstanding programs but don't always take the final steps to evaluate, summarize, and market the impacts of those programs. This article presents a process for developing and promoting Extension educational programs that yield impact.


Keith G. Diem
Program Leader in Educational Design & Associate Professor
Rutgers Cooperative Extension, Rutgers University
New Brunswick, New Jersey
Internet Address: kdiem@aesop.rutgers.edu


Why Be Concerned with Impact?

Impact is about making an impression. In Extension, it is the positive difference we make in people's lives as a result of programs we conduct. These programs may include teaching, published curriculum, volunteer training, or applied research and may or may not involve the public directly while they are being delivered. Yet the results they achieve must ultimately change people's attitudes or behavior, or benefit society in other ways (Diem, 1997). Proving program impact is important to:

  • Justify the investment of time and effort, as well as the dedication of public and private funds.
  • Earn and build professional, organizational, and political credibility and support.
  • Satisfy the requirements of political bodies and funding agencies.
  • Yield tangible results that serve as a basis for scholarly publications, as well as awards and recognition.
  • Determine to what degree participants achieve intended results.

This article presents a process for developing and promoting Extension educational programs that yield impact.

To Begin, Start with the "End"

The program development model typically used by Cooperative Extension incorporates:

  • Needs assessment,
  • Development of program objectives based on the organization's mission to meet those needs,
  • Program planning and delivery,
  • Evaluation, and
  • Reporting the results.

Measuring impact is part of the evaluation component. Despite what is commonly believed and typically practiced, however, evaluation needs to be considered while a program is being developed, as well as during its delivery and after its completion. Therefore, the intended end results should be identified in the beginning of the program planning process.

Take Program Development One Step at a Time

Instead of allowing program development to become a complex, overwhelming exercise in futility, try to look at it as a series of simple, manageable steps. Here are some actions to take and points to consider when developing programs so that measuring impact becomes an integral part of the process.

1. Develop Goals for Programs Based On Need

Who are the audiences to be served? What are the outcomes to be sought? Are these outcomes attainable and measurable? Do potential programs fit local or clientele interests, state strategic plans, national initiatives, or federal performance goals?

2. Assess Resources Available to Conduct Programs

Funding surely influences objectives and affects program viability and success. Are you relying on public funds, grant monies, or user fees? How stable is the funding? Consider what funding agencies will require when you report results. Be sure to build evaluation needs and costs into your budget.

3. Determine Priorities

Consider time and staff available. If choices must be made, which programs are likely to have greater impact on more people? Which ones are likeliest to grow? Which might be turned over to volunteers or advisory groups to maintain? Which might provide the greatest positive media attention and other forms of recognition for you and the organization? Which are likely to generate the most scholarly publications? Which have the support of clientele and advisory groups?

Involve stakeholders--people who have an interest in program results--in planning and decision-making whenever possible. According to Wentling (1980, p. 2), "evaluation is an integral part of any decision-making process."

4. Determine Specific, Measurable Objectives for the Programs Selected

Confirm that your objectives are measurable and attainable. If you are unable to list your objectives in writing, you are probably not ready to plan or deliver the program. Determine how you will know if the objectives are met. Your goals identify your intentions, and general objectives state what you expect your program will accomplish. However, educational objectives are preferable--they specifically state what the program participant or target audience will do, learn, or gain as a result of the program. These objectives will reflect the many different levels of program outcomes that might be sought. Below is a useful model (Bennett, 1975) that depicts the range of outcomes that might be desired in program delivery:

              • End Results
            • Practice Change
          • KASA (Knowledge, Attitudes, Skills, Aspirations) Change
        • Reactions
      • People Involvement
    • Activities
  • Inputs

Note that true impact increases as you go up the hierarchy. The lower levels are important precursors but are not evidence of impact. Unfortunately, the cost of seeking higher-level outcomes is that they are often more difficult to measure or require a longer time to do so. An additional challenge of proving "end results" is making a plausible connection between the program offered and the results realized. Program planners must be able to answer the question: "How do you know this program was responsible for these impacts?" Your claims must be believable, grounded in a sound evaluation design and a clear explanation of how the results were obtained.

Below are examples of Extension-related objectives for each of the levels of the hierarchy, starting from the bottom. Before you start conducting a program, analyze your listed objectives to make sure they can be accomplished and measured, and at what level they are likely to yield impact. If your objectives don't go beyond the "reactions" level, then your results are unlikely to either.

Inputs--time, funds, staff invested

  • A budget of $2,500 will be allocated.
  • Fifty staff hours will be dedicated.
  • A 2-year needs assessment study, surveying 384 citizens, will be conducted.

Activities--events, activities, programs, sessions offered

  • A 5-week sheep production course will be offered.
  • 4-H Summer Camp is scheduled for August 1-5.
  • A home-study course on financial management will be presented.
  • A subscription-based newsletter will be offered to interested commercial fishermen.

People Involvement--number of participants involved

  • Two hundred welfare recipients will be enrolled.
  • Twenty volunteers will be trained to assist with program delivery.
  • Two-thirds of Chamber of Commerce members will attend.
  • All elected officials from county government will participate.
  • Enrollment at 4-H Summer Camp will be doubled within 2 years.

Reactions--what participants thought of the program, its organization, its leader, etc.

  • Seventy-five percent of workshop attendees will rate the program as Very Good or Excellent.
  • The instructor will attain an average instructor rating of at least 8 on a scale of 1 to 10.
  • Ninety percent of conference attendees will agree that they would recommend it to others.
  • Seventy-five percent of participants will be satisfied with meeting facilities, food service, and lodging.

KASA (Knowledge, Attitudes, Skills, Aspirations) Change

  • Two-thirds of farmers attending will learn how to apply herbicides properly.
  • Ninety percent of citizens participating in the voter education course will report being likelier to vote in the next election.
  • Seventy percent of sixth-grade students will be more interested in science careers.
  • Sixty percent of home gardeners will understand the value of composting lawn clippings.
  • All low-achieving students enrolled in the program will demonstrate at least two indicators of improved self-esteem by the end of the school year.

Practice Change--improved methods of action adopted

  • Ten grain producers will employ conservation tillage methods.
  • Fifty percent of program participants will follow guidelines of the USDA food pyramid in meal preparation.
  • Seventy-five percent of teenage smokers attending the workshop will quit smoking within 6 months.
  • Ninety percent of commercial fishermen attending will use special nets to avoid trapping sea turtles and marine mammals.

End Results--broader outcomes, effects, and benefits resulting from changes in practices

  • Non-point-source pollution will be reduced by 50%.
  • Fatal farm accidents will be reduced by one-third.
  • Twenty fewer families will be on public assistance.
  • Twenty-five percent of business owners participating will increase operating profits by at least 10% within one year.
  • Within 2 years, 75% of homeowners participating will reduce consumer credit interest payments by 10%.
  • Average science and math scores will improve by 15% by the end of the school year.
  • As a result of the neighborhood watch program, thefts and burglaries in this neighborhood will decrease by 30% within 1 year.
  • Teen pregnancy rates will be reduced by 10% within 5 years.

Of course, results are reported in direct relation to the original objectives. Therefore, writing objectives in advance is not just a bureaucratic exercise, but part of a program plan that makes determining the resulting program impacts much easier.

5. Conduct the Program According to Plans, Based on the Objectives Set

Most Extension faculty and staff are already experts in conducting outstanding programs. The important point to remember is not to stop program development when the program ends. Take the final steps to evaluate, measure, report, and market the impacts of such outstanding programs.

6. Measure Program Impacts Using Suitable Evaluation Methods and Tools

"Suitable" evaluation methods will vary depending on the type of program and its objectives, the audiences affected, the time frame, and to whom the results will be communicated. You must also determine where to obtain the information needed. It may not always be possible to ask program participants directly. For example, parents or teachers might be better sources of information about their children than the children themselves. Farmers might not want to divulge details about their farming practices, but such data might already be available from other sources, such as the Department of Agriculture. The number of citizens below the poverty level would be better obtained from U.S. Census data than from surveying local residents.

In summary, there are essentially three ways to evaluate impact on your clientele:

  • Ask them,
  • Test them, or
  • Observe them.

When you choose evaluation methods, keep in mind the purpose of your study and select methods that help achieve that purpose. Then conduct the evaluation by the most careful, thorough, and systematic means possible (Diem, 2002b). Below is an overview of basic methods and tools that can be used to measure impact, followed by a brief worked sketch of the pre-test/post-test approach.

Survey research (asking)

  • Written questionnaires
  • Program follow-up surveys and longitudinal studies
  • Interviews, testimonials, & case studies

Simple experimental designs (testing)

  • Pre-test, post-test
  • Post-test with control group comparison

Observations (observing)

  • Direct observation of program participants (by program leader or by objective observers/recorders)
  • Reviewing information from other sources, such as U.S. Census data or government reports
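To make the "testing" approach above more concrete, here is a minimal sketch assuming a simple pre-test/post-test design with paired scores for the same participants. All of the scores, the mastery threshold, and the two-thirds target are invented for illustration; they correspond to a hypothetical KASA-level objective such as "two-thirds of farmers attending will learn how to apply herbicides properly."

    # Hypothetical pre-test/post-test comparison (the "testing" approach above).
    # All names, scores, and thresholds are illustrative assumptions, not data
    # from the article.

    PASSING_SCORE = 70          # assumed mastery threshold on a 100-point quiz
    OBJECTIVE_PROPORTION = 2/3  # e.g., "two-thirds of farmers attending will learn..."

    # Paired scores for the same participants before and after the program.
    pre_scores = [45, 60, 55, 72, 38, 65, 50, 58, 61, 47]
    post_scores = [78, 82, 69, 90, 55, 88, 74, 71, 85, 66]

    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    avg_gain = sum(gains) / len(gains)

    passed_post = sum(score >= PASSING_SCORE for score in post_scores)
    proportion_meeting = passed_post / len(post_scores)

    print(f"Average knowledge gain: {avg_gain:.1f} points")
    print(f"{passed_post} of {len(post_scores)} participants "
          f"({proportion_meeting:.0%}) reached the mastery threshold")
    print("Objective met" if proportion_meeting >= OBJECTIVE_PROPORTION
          else "Objective not met")

A stronger design would also track a comparison group that did not receive the program (the "post-test with control group comparison" listed above), which makes it easier to argue that the program, rather than some outside factor, produced the gains.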

7. Report Findings to Interested Stakeholders

Stakeholders are the people who have an interest in your program and its impacts (Patton, 1978). They may be program participants and clientele, the media, elected officials, or funding agencies. Vary what and how you report your impacts based on your audience. Consider what they want or need to know.

You will often have to write different versions for different audiences. A scientific journal might want all the details of a research methodology employed, but many audiences want only a summary of the results or impacts. For these audiences, keep it simple. Also consider reading level, and avoid technical jargon and acronyms.

Choose Your Approach

Evaluation and reporting can be done using quantitative methods, qualitative methods, or a combination of the two.

Quantitative

A quantitative method uses "hard" data that can be clearly counted and measured, attempting to categorize and summarize results using numbers and labels. A quantitative approach to reporting program results might be compared to a news story: it tends to include only the facts of what happened and often summarizes data from a large group.
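As a hypothetical illustration of that kind of counting and summarizing, the short sketch below tallies invented end-of-workshop ratings and reports the share meeting a "reactions"-level objective such as "Seventy-five percent of workshop attendees will rate the program as Very Good or Excellent." The rating categories and responses are assumptions, not data from any actual program.

    from collections import Counter

    # Hypothetical end-of-workshop ratings; categories and counts are invented
    # solely to illustrate a quantitative summary.
    ratings = ["Excellent", "Very Good", "Good", "Excellent", "Very Good",
               "Fair", "Excellent", "Very Good", "Good", "Excellent"]

    counts = Counter(ratings)
    total = len(ratings)

    # Frequency table of responses by category.
    for category in ["Excellent", "Very Good", "Good", "Fair", "Poor"]:
        n = counts.get(category, 0)
        print(f"{category:<10} {n:>3}  ({n / total:.0%})")

    # Share of respondents in the top two categories, compared with the objective.
    top_two = counts["Excellent"] + counts["Very Good"]
    print(f"\n{top_two / total:.0%} rated the program Very Good or Excellent "
          f"(objective: 75%)")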

Qualitative

A qualitative method focuses more on the human experience, often using anecdotal evidence and testimonials. It aims to thoroughly describe a situation or explain reasons for a problem or circumstance. It is typically thorough and provides in-depth understanding of a situation or group of people but does not attempt to quantify results.

Examples include focus groups and case studies, which involve direct observation or interviews with single subjects or single small social units such as a family, club, school classroom, etc. A qualitative approach to reporting is similar to a human interest or feature story that talks about the personal impact of an event from the perspective of a few individuals.

Combination

Integrating both approaches allows a program evaluator to provide credible facts that explain the impact of a program, while adding a rich, human element that indicates how people were affected by the experience.

Write Impact Statements

Impact statements are concise but meaningful overviews of program results. They go beyond explaining "What" or "How" to answer the questions "Who cares?" or "So what?" Impact really doesn't happen until at least the "KASA change" level and isn't as significant until the "practice change" and "end results" levels. To bolster the believability of your statements, consider succinctly including the source of your data so the reader doesn't have to "take your word for it." You might do this by leading off with a phrase such as "According to county records," "Based on pre-post survey data," or "A comparison of Census data from 1990 and 2000 indicates." Here are some examples adapted from actual, effective impact reports.

Impact Statement 1

As a result of Extension-led training, 800 farmers statewide have adopted sustainable agricultural practices, including integrated pest management, crop rotation for disease control, reduced herbicide rates for crop production, refined nutrient management practices, pre-side dress nitrogen testing, and selection of crops best adapted to soils and growing conditions. These practices have resulted in reduced purchased inputs, saving more than $400,000 in pesticide costs on 28,000 acres.

Impact Statement 2

In the past five years, seven Extension community economic development agents assisted more than 1,200 community leaders with local economic development. This assistance led to the creation of 10 industrial parks; the expansion, retention, and attraction of 325 businesses and 34 parks; and the creation or retention of 6,807 jobs. These projects involved the investment of $33 million in public infrastructure and $467 million in private sector capital investment in local communities.

Impact Statement 3

During a five-year period, 160 youth from an inner-city, high-risk housing project participated in an Extension-sponsored, daily, three-hour after-school program. Expected outcomes included reduced incidents of substance abuse; decreased behavioral problems in school; and an increase in discipline, respect, integrity, and responsibility through training and role modeling. To build grassroots ownership in the program, adults from the housing project were trained and hired as staff. Youth gained an average of 1.4 years in reading test scores and 1.5 years in math during the first year. Academic gains continued every year of the program. Ninety percent of the parents surveyed agreed that their children's behavior had improved as a direct result of participation in the program. Furthermore, 98% of the adults completed high school or obtained a G.E.D. certificate during the program.

Do Something with the Results!

According to Brinkerhoff, Brethower, Hluchyj, & Nowakowski (1983), "Evaluation is for making it work. If it works . . . notice and nurture. If it doesn't work . . . notice and change." In this case, the "it" is your Extension program. Here are some suggestions to employ this philosophy.

Use positive results to:

  • Market your program to prospective program participants.
  • Report to elected officials.
  • Incorporate into future funding requests.
  • Write scholarly publications.
  • Announce to the media via news releases.
  • Apply for awards.
  • Include in professional credentials (such as promotion/tenure applications).
  • Learn how your clientele were affected by their involvement in the program and how the program might be used or adapted to meet the needs of other audiences.
  • Feel satisfaction in accomplishing your goals, achieving results, and benefiting society and your profession.

Use less-than-positive results to:

  • Improve programs that have problems but also potential.
  • Involve stakeholders to help set priorities, make choices, and eliminate ineffective programs. A simple self-assessment tool can help facilitate that process (Diem, 2002a).
  • Write scholarly publications, describing what was learned through the process you used. Often, as much can be learned from apparent "failures" as from obvious successes.

Where to from Here?

It is often stated that Cooperative Extension is the "best-kept secret," but this should not be considered a proud motto! Extension does so much good for so many people, but the benefits and outcomes to individual citizens and society as a whole often don't get communicated to decision-makers, including legislators, the media, and other opinion leaders, or to current or potential clientele.

This is not merely a marketing problem. Because funding is no longer "automatic," proving value and relevance is more important than ever. Take a little extra effort to integrate evaluation into program planning, and you'll end up with the evidence you need to demonstrate and communicate program impact to all who need to know. In the long run, the rewards will far outweigh the investment.

References

Bennett, C. (1975). Up the hierarchy. Journal of Extension, March/April, 7-12.

Brinkerhoff, R. O., Brethower, D. M., Hluchyj, T., & Nowakowski, J. R. (1983). Program evaluation: A practitioner's guide for trainers and educators. Boston: Kluwer-Nijhoff Publishing.

Diem, K. (2002a). Making program choices when resources are limited using a self-assessment tool with program stakeholders. Journal of Extension [On-line], 40(4). Available at: http://www.joe.org/joe/2002august/tt3.shtml

Diem, K. (2002b). Using research methods to evaluate your Extension program. Journal of Extension [On-line], 40(6). Available at: http://www.joe.org/joe/2002december/a1.shtml

Diem, K. (1997). Measuring impact of educational programs. Rutgers Cooperative Extension fact sheet #869. New Brunswick, NJ: Rutgers University.

Patton, M.Q. (1978). Utilization-focused evaluation. Beverly Hills: Sage Publications.

Wentling, T.L. (1980). Evaluating occupational education and training programs. Boston: Allyn and Bacon, Inc.