The Journal of Extension - www.joe.org

June 2009 // Volume 47 // Number 3 // Commentary // v47-3comm1


What Progress, Program Evaluation? Reflections on a Quarter-Century of Extension Evaluation Practice

Abstract
The September 1983 issue of the Journal of Extension was devoted entirely to the topic of program evaluation, marking the beginning of a new emphasis in Extension programming. This "call to action" was based largely on the need for program accountability; Extension educators could no longer afford to assume their programs worked or that their worth was self-evident. In the years since, evaluation in Extension has developed considerably. This Commentary explores a new call to action for evaluation in Extension, with a focus on more "logical" logic models, organizational evaluation capacity and support, and a greater emphasis on evaluation use.


Roger A. Rennekamp
Professor and Department Head
Youth Development Education
roger.rennekamp@oregonstate.edu

Mary E. Arnold
Associate Professor and 4-H Youth Development Specialist
mary.arnold@oregonstate.edu

Oregon State University
Corvallis, Oregon

Just over 25 years have passed since the Journal of Extension published its landmark issue dedicated exclusively to program evaluation within the Cooperative Extension System (September 1983, http://www.joe.org/joe/1983september/index.php). In response to a number of external reviews and assessments that questioned both the relevance and impact of Extension programs, that issue of the journal served as a "call to action" for Extension educators nationwide, challenging them to think differently about their work.

Extension educators could no longer afford to simply assume that their programs worked or that their worth would be self-evident. Answering the call to action meant developing new skills in program development, evaluation, and the effective use of evaluation results. Significant progress has been made in the quarter-century that followed: the use of logic models for program planning has become widespread, the capacity of the Cooperative Extension System to conduct meaningful program evaluation has increased markedly, and data for decision making are more readily available.

Despite these accomplishments, a new "call to action" is needed. The practice of logic modeling must be better understood, commitment and capacity for evaluation must become more widely distributed, and the purpose of evaluation must be revisited.

Put the Logic into Logic Models

Models for program planning are not new to Extension. Claude Bennett (1975) introduced Extension to a hierarchy for understanding the relationships between resources, activities, and results. Patrick Boyle (1981) and Ed Boone (1985) also made significant contributions to how Extension educators think about programming through their widely disseminated models for program development. But the familiar "input-output-outcome" language of logic modeling did not enter the Extension vernacular until the mid-1990s (Taylor-Powell & Boyd, 2008).

Faculty at University of Wisconsin Cooperative Extension built upon these basic evaluation frameworks to create a comprehensive program development model that connects program planning, implementation, and evaluation (Taylor-Powell & Boyd, 2008). Seeing the effectiveness of logic models in their home state, the Wisconsin team provided leadership in disseminating logic modeling throughout the Extension system. Today, many Extension educators use logic modeling as a tool for developing program plans and reports.

But logic models are far more than templates for preparing plans of work. Ideally, logic models should represent an underlying theory of why a program should work as intended. Inferences about the linkages presumed to exist between inputs, outputs, and outcomes can be based on research, intuition, experience, and, at times, untested assumptions. But logic models should at least represent plausible explanations of how a given program should work.

Without such an understanding of what logic models represent, Extension educators tend to focus on "filling out the form" rather than using logic models as a framework for articulating program theory. Consequently, the linkages among inputs, outputs, and outcomes tend to be weak or unconfirmed. Evaluations must be designed so that the information they generate helps confirm that the presumed linkages between actions and outcomes actually exist. As that confirmation accumulates, the theory underlying the program matures.

Build Capacity and Organizational Support for Evaluation

Building capacity for program evaluation requires attention to many dimensions of organizational life. One approach to capacity building has been to offer training to Extension educators on how to conduct program evaluation, placing the expectation for conducting effective program evaluation solely on field staff. However, an old adage suggests that it is impossible to "train away" an organizational development problem.

Consequently, another approach to capacity building has been to hire one or more evaluation "specialists" to support the evaluation function across the organization. A recent survey of 41 Extension evaluators revealed that the largest share of these evaluators (37%) is placed in distinct program development and evaluation units; the rest are part of central administration, an academic unit, or a program area (Guion, Boyd, & Rennekamp, 2007).

While increasing the number of personnel with responsibility for evaluation activities was a positive move, these evaluators were often seen as the people who would save the organization from having to deal with the unsavory messiness of evaluation.

More recently, Taylor-Powell and Boyd (2008) have suggested that building evaluation capacity and developing organizational support go hand in hand. With such an approach, commitment and capacity for evaluation are developed broadly across the organization, and the evaluator serves more as an evaluation coach, working alongside program staff to conduct program evaluations (Arnold, 2006). Over time, these experiential approaches develop evaluation capacity that is widely distributed across the organization, thus providing opportunities for organizational learning.

Rethink Evaluation Use

After experiencing years of program expansion and little demand for accountability, Extension found itself operating in a very different world for the last two decades of the twentieth century. Mary Andrews (1983) characterized the new context for programming as one in which it can no longer be:

Taken for granted that programs are good and appropriate. Extension is operating in a new environment, an environment more open to criticism and demands for justification of actions. All publicly funded agencies, not just Extension, are vulnerable to these times. In an era of accountability, Extension must be able to defend who and how people are being served. It also needs to document that programs are achieving positive results (1983, p. 8).

Despite the widespread demand for accountability, an ongoing debate within the evaluation community is whether the ultimate goal of program evaluation is to "prove" or to "improve." When evaluation findings are used to demonstrate to critics that a program is worthy of continued investment, we approach evaluation with the mindset of having "something to prove." At other times, evaluations are done with the aim of discovering new information that will help improve the program. When such new information is shared with others, we improve more than just the program in question; we also contribute to the body of knowledge that informs professional practice.

In 1983, the need to generate accountability data that documented Extension's value to society was first and foremost, almost to the detriment of developing a healthy perspective on what evaluation can or should be in a modern organization. Joan Thomson (1983, p. 3), then editor of the Journal of Extension, wrote in her introductory notes to the evaluation issue that the "rationale for conducting Extension program evaluation in today's complex environment . . . is often overshadowed by a suspicion of who, why, and for what is Extension being questioned." Consequently, individual and organizational learning took a back seat to countering the criticism that had been levied against Extension.

When Extension educators view information generated by program evaluation as something to be used by someone else, they miss important opportunities for learning and growth. Evaluation questions formulated from an accountability mindset often ignore issues related to cost, relevance, quality, and cultural responsiveness. We believe that program improvement should be given the same weight as accountability when evaluation questions are formulated.

It's All About Learning

A deep understanding of the theory of change that underlies a program's logic model is essential to sound programming. When Extension educators possess that understanding, they are better able to formulate relevant evaluation questions. We also know that Extension educators need opportunities to participate in real-world evaluation projects, under the guidance of an evaluation coach, if they are to master skills needed to create program-relevant knowledge. Finally, Extension must learn to use evaluation results, not only to help justify a program's existence, but to improve it.

Learning organizations have a hunger for new information that makes them more efficient and effective. Through evaluation, members of the organization gain new information, insights, and perspectives on their programs that enable them to work in new ways. As they do, they rise to new levels of personal effectiveness and facilitate peak organizational performance.

References

Andrews, M. (1983). Evaluation: An essential process. Journal of Extension [On-line], 21(5). Available at: http://www.joe.org/joe/1983september/83-5-a1.pdf

Arnold, M. E. (2006). Developing evaluation capacity in Extension 4-H field faculty: A framework for success. American Journal of Evaluation, 27(2), 257-269.

Bennett, C. (1975). Up the hierarchy. Journal of Extension [On-line], 13(2). Available at: http://www.joe.org/joe/1975march/1975-2-a1.pdf

Boone, E. (1985). Developing programs in adult education. Englewood Cliffs, NJ: Prentice Hall, Inc.

Boyle, P. (1981). Planning better programs. New York: McGraw-Hill Book Company.

Guion, L., Boyd, H., & Rennekamp, R. (2007). An exploratory profile of Extension evaluation professionals. Journal of Extension [On-line], 45(4), Article 4FEA5. Available at: http://www.joe.org/joe/2007august/a5.php

Taylor-Powell, E., & Boyd, H. H. (2008). Evaluation capacity building in complex organizations. In M. T. Braverman, M. Engle, M. E. Arnold, & R. A. Rennekamp (Eds.), Program evaluation in a complex organizational system: Lessons from Cooperative Extension. New Directions for Evaluation, 120, 55-69.

Thomson, J. (1983). What value, program evaluation? Journal of Extension [On-line], 21(5), 3. Available at: http://www.joe.org/joe/1983september/83-5-ed1.pdf