Summer 1990 // Volume 28 // Number 2 // Forum // 2FRM1


Organizational Philosophy for Program Evaluation

Daniel J. Decker
Assistant Professor and Extension Leader
Department of Natural Resources
Cornell University

Bettie Lee Yerka
Associate Professor, Department of Human Service Studies
Program Specialist, Program Development and Evaluation Office
Cornell Cooperative Extension
Cornell University, Ithaca, New York


Evaluation can be healthy for Extension if we attend seriously to its dual roles: (1) program management and improvement and (2) accountability and impact documentation. As one state program leader put it, "Evaluation should be 75% useful to the programmer and 25% useful for administrative reporting needs." Making evaluation useful for the programmer means using evaluation to support program decision making. Evaluation should be considered in this context!

Evaluation can help with planning decisions, implementation decisions, and tough decisions about continuing, modifying, or terminating a program. Although we receive all sorts of informal feedback on specific activities, such feedback seldom indicates whether our program goals are being met. Through evaluation, we get systematic feedback on progress and results in programming. In essence, evaluation is an integral part of the program development process.

Program Accountability Is Important, Too!

Evaluation for improvement is important, but we also have a responsibility for accountability: demonstrating the results of our programs to funders and other stakeholders. Constituents need to know that their investment in Extension programs pays dividends and significantly affects people and their environments.

When thinking about evaluation for accountability, we need to remember that all three partners of the Extension System - local, state, and federal - have particular needs for evaluative information. These needs typically become more formal as you move away from the local scene, but at all levels the same basic concept applies - Extension leadership makes claims about program needs to people who make decisions about funding support. The decision makers respond to the claims by appropriating funds, or not. They continue their support when presented with evidence that earlier appropriations led to the desired impacts.

Essentially, we can view the funding process for Extension as striking a bargain (Figure 1). Extension's leaders at several levels make four general kinds of program claims:

1. An issue or problem exists that an educational intervention can help change.
2. Programs will address the issue or problem in particular ways (that is, specific use of funds).
3. Programs will have a particular impact.
4. Funds will be used effectively (program will have the impact specified) and efficiently (program will be cost-effective).

Figure 1. A bargain is struck.

These claims arise from documentation of existing educational needs or evidence about anticipated needs for selected segments of the public.

When Extension is persuasive, public officials respond with a promise of support. A "bargain" is thus struck. The bargain is fulfilled when Extension receives public funds and then demonstrates that those funds were used as promised and achieved an acceptable level of success. This element is accountability (Figure 2).

Figure 2. A bargain is fulfilled.

The bottom line of this simple concept is that when we accept funds to support our programs, we also accept responsibility for accountability. Consequently, we need to understand accountability requirements and build them into program evaluation activities that will also be useful for program improvement.

Top It Off with Common Sense

A final element of this emerging philosophy of program evaluation is common sense about the following:

1. Scope of Evaluation. We can't formally evaluate everything we do in Extension. Neither the time nor the resources are available. Thus, priorities must be established that concentrate on an important evaluative component of each major program.

2. Realistic Goals. Given the limitations of time and other vital resources, we need to set levels of objectives that we can reasonably expect to accomplish within a planning period, using milepost indicators of progress along the way.

3. Recognition of Methodological Limitations. Some of our programs are difficult to evaluate from a methodological standpoint. Consequently, we have to examine each situation and make the best methodological choices we can, knowing that at times we'll have to settle for an imperfect approach.

Varying Roles in Evaluation

An important part of an organizational philosophy about program evaluation is developing common expectations of who's responsible for particular aspects of evaluation. Faculty, agents, and administrators have different, but significant, roles in accomplishing systemwide evaluation of major programs. For example, the following roles and responsibilities for program evaluation and accountability have been developed within Cornell Cooperative Extension:

1. Extension Administrators. Establish overall definitions and directions for accountability and evaluation, identify broad areas for study, facilitate resource acquisition, and foster a positive environment for evaluation and accountability among faculty, administrators, and field staff.

2. University Extension Faculty. Determine statewide data needs essential for managing and interpreting major programs to constituencies; help provide practical and feasible means to obtain data; maintain an interactive process with agents, peers, and administrators; and provide expertise and team leadership for formal and informal studies or assessments of major programs.

3. University Evaluation Faculty. Plan, design, and implement specific formal studies; develop resource material for use by county/regional staff, faculty, and others; develop and deliver inservice education for professional staff; and provide technical expertise on selected comprehensive studies.

4. Extension Agents and Specialists. Conduct appropriate county-level evaluation for program improvement/local accountability purposes; collect and document standard information for program accomplishments; identify needs/interests of local influentials and tailor communications on program results to these needs; and participate in statewide/multicounty studies that apply to locally identified program areas.

Conclusion

What we've presented is a developing perspective. An organizational philosophy about any topic is a living, growing, dynamic conceptualization. The key point is that evaluation isn't just a set of methods. Useful and effective evaluation must be built on the foundation of a well-articulated organizational philosophy for program evaluation.