Fall 1990 // Volume 28 // Number 3 // Feature Articles // 3FEA7
Analyzing Program "Failure"
Abstract
Program failure is seldom discussed in Extension. But when an Extension program fails to meet the educational objectives established for it, we should find out why so we can avoid future failure.
This article presents some ideas about the causes of program failure. It also makes a case for comprehensive evaluation as an element of program development to help Extension educators avoid, as well as identify, causes of failure.
Potential Causes of Failure
Program failure can result from inappropriate content, inadequate implementation, or low achievement on outcomes. Given the emphasis Extension has placed on documenting program impacts in recent years, Extension educators sometimes want to focus their evaluative activities exclusively on impacts. However, assessing impacts without examining content and implementation is an incomplete approach. Here's why.
Let's assume that a program impact evaluation indicates different or less-than-expected results for your program. For example, in a woodland management program you may have found that half as many people as expected made informed decisions about management of their woodlands. As an educator, what decisions can you make about modifying or continuing the program based on this information? None, because you have insufficient information. With only impact information in hand, some fundamental questions are left unanswered.
If program accomplishments fall short of expectations, we should ask four questions:
- Was the underlying program model faulty?
- Was the implementation of the program appropriate and carried out as prescribed, or was it incomplete and different from the plan?
- Were expectations of outcomes/impacts unrealistic and therefore unattainable?
- Was the evaluation itself adequate - in terms of methods, impact indicators examined, and people contacted - to obtain valid, reliable, and programmatically meaningful information?
Answers to these questions are needed before we can judge that the "program" was unsuccessful and subsequently make management decisions about its future. Here are considerations pertinent to each question.
Program Model
Although we seldom think about it, every program we develop is based on some "model." Typically, these models are sets of assumptions, concepts, and hypothesized relationships that exist in the minds of programmers. If our models are flawed, our programs may flounder. For example, I might assume that a primary incentive for woodland management will be financial returns for effort expended, when actually esthetic considerations are more important to woodland owners. A program built on the original assumption, intended to help increase income from woodlands, may not capture the interest of many owners.
Unfortunately, we seldom describe our models explicitly in writing so they can be studied by us or others. Overall, Extension educators need to be more conscientious and precise as we articulate our program models and more willing to hold these up for scrutiny by our peers.
An examination of the underlying model for a program ideally occurs during the planning stage of program development. If such an examination wasn't done before program implementation, you can try to articulate the program model afterward and assess its validity in light of the program implementation experience. When a program doesn't meet your expectations, though, be careful not to jump immediately to the conclusion that your program model is faulty. It's possible that the model is basically excellent, yet the program wasn't implemented as planned or your expectations of impact were unrealistically high.
Program Implementation
Possibly the most frequent reasons for under-accomplishment among programs designed by experienced programmers are inadequacies or deviations occurring during program implementation. Even the soundest program model may result in unsatisfactory impacts if coupled with inadequate or inappropriate implementation. For example, I might plan on having woodland owners attend an introductory workshop and then follow up with an in-depth home study course, but find that none of the woodland owners take the course because it requires too much time or is offered in mid-winter, when no one wants to go outdoors to conduct the field exercises.
To determine whether implementation strategies and actual activities are appropriate to meet program objectives, we need to undertake process evaluation. It's best to engage in process evaluation while the program is being implemented. In so doing, you can monitor program progress, identify problems as they develop, and take corrective action to put the program back on track.
From a managerial perspective, this is the best kind of evaluation - one that helps make a program successful rather than merely determining the level of success after the program is over.
Realistic Expectations
One possible reason for not achieving the level of accomplishment expected at the inception of a program is that expectations aren't realistic - they just aren't reasonably achievable given the nature of the educational need, program resources, and time frame of the program. Some well-conceived, well-implemented programs don't have the results envisioned simply because expectations weren't realistic in light of the context of the program. When this is the case, it should be recognized. It would be a shame to misjudge the merits of a program simply because a comparison of anticipated to actual impacts indicated a deficiency.
Evaluation Methods Chosen
As in any form of inquiry - from program evaluations to field studies to lab experiments - the possibility exists for inadequate or inappropriate methods to be used. When a program evaluation indicates that a program didn't have the impacts expected, you might want to ask several questions about the evaluation method chosen. Were the right people asked for input? Were enough of them asked? Were they asked the right things in the right ways (validity and reliability concerns)? Were data analyzed and interpreted carefully?
For example, a telephone interview methodology could be used to obtain data in an evaluation of a woodland management program. The evaluation finds that among the sample of 20% of program participants contacted, only one-fourth actually conducted management practices, rather than three-fourths as expected. On careful examination of the methodology, however, one finds that all telephone interviews were done on weekdays between 9 a.m. and 4 p.m. Further examination finds that 90% of those contacted were over 65 and retired (they were likely to be home to answer the telephone during the hours when the calls were made). Program registration data indicate that 80% of participants were 35 to 55 years of age and employed. Thus, the methodology missed getting a representative sample of program participants and possibly ended up contacting the segment least likely to change field practices.
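To make the arithmetic behind that sampling skew concrete, here is a minimal sketch in Python. The adoption rates it assumes - roughly 75% among the employed 35-to-55 group and 15% among retirees - are hypothetical figures chosen only for illustration, not data from the program; the 80/20 registration mix and the 90/10 interview mix are taken from the example above. Under those assumptions, a sample matching the registration mix would report an adoption rate near 63%, while the weekday-daytime sample would report only about 21% - close to the disappointing one-fourth described above.

```python
# Hypothetical illustration of how a biased phone sample can depress the
# estimated adoption rate. The within-group adoption rates below are
# assumptions for the sake of the example, not findings from the article.

# Assumed true adoption rates within each participant group
adoption = {"employed_35_55": 0.75, "retired_over_65": 0.15}

# Participant mix from registration data (80% employed, 20% retired)
population_mix = {"employed_35_55": 0.80, "retired_over_65": 0.20}

# Mix actually reached by weekday, daytime phone calls (10% employed, 90% retired)
sample_mix = {"employed_35_55": 0.10, "retired_over_65": 0.90}

def estimated_rate(mix, rates):
    """Weighted average adoption rate for a given mix of respondents."""
    return sum(share * rates[group] for group, share in mix.items())

print(f"Representative-sample estimate: {estimated_rate(population_mix, adoption):.0%}")  # about 63%
print(f"Daytime-call estimate:          {estimated_rate(sample_mix, adoption):.0%}")      # about 21%
```

The point of the sketch is simply that the same program, with the same true impacts, can look like a failure when the evaluation sample over-represents the group least likely to change its practices.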
Methodological problems can be alleviated primarily by working with an evaluation specialist to design your evaluation. Ask lots of questions. The evaluation specialist should ask you lots of questions, too. (If not, find someone else!) Get your colleagues to help you review your ideas about the evaluation. Ask them to review any data collection instruments you develop. Discuss your plans for data analysis. Ask them to be critical. It's better to get the difficult questions raised beforehand rather than after the time and effort to conduct an evaluation have been expended. Eventually, ask for comments on your interpretation of data, too.
Conclusion
A comprehensive approach to program evaluation is a key to interpreting the reasons why a program doesn't meet expectations. It's necessary to plan the evaluation before solidifying the program design and to carry out some evaluative activities before initiating the program. This isn't the approach taken by many people. Often Extension educators wait until the program is well underway, possibly near completion, before considering their evaluation needs. This approach makes it impossible for the programmer to use evaluative information to adjust and fine-tune the program during implementation. Unfortunately, valuable and possibly irretrievable program resources may be expended with less impact than might have resulted if evaluation had been ongoing and the program monitored during implementation.
In summary, an analysis of an apparent program failure should discern whether the fault lies with the program model, implementation strategy, expectations, or evaluation methods. This determination will provide information useful in planning your next program and may help avoid failure in the future.