The Journal of Extension - www.joe.org

February 2016 // Volume 54 // Number 1 // Ideas at Work // v54-1iw1

Making Evaluation Work for You: Ideas for Deriving Multiple Benefits from Evaluation

Abstract
Increased demand for accountability has forced Extension educators to evaluate their programs and document program impacts. As a result, some Extension educators may view evaluation simply as the task, imposed on them by administrators, of collecting outcome and impact data for accountability. They do not perceive evaluation as a useful tool in Extension programming and, therefore, pay little or no attention to it. The purpose of this article is to describe how to integrate evaluation into Extension programming to gain all the benefits evaluation offers. These benefits include program improvement, program monitoring, program marketing, and Extension advocacy.


K. S. U. Jayaratne
Associate Professor and State Leader for Program Evaluation
Department of Agricultural and Extension Education
North Carolina State University
Raleigh, North Carolina
jay_jayaratne@ncsu.edu

Introduction

With budgets shrinking, Cooperative Extension is competing for limited funds, and policy makers are demanding more accountability for publicly funded programs (Lamm, Israel, & Diehl, 2013; Peters & Franz, 2012). As a result, Extension administrators are placing emphasis on the need to document program outcomes and impacts to attain increased accountability. This heightened demand for accountability by administrators is forcing Extension educators to take steps to evaluate program outcomes and impacts (Baughman, Boyd, & Kelsey, 2012).

In general, Extension educators tend to pay more attention to program planning and delivery than to evaluation. Extension educators may view evaluation more as the task, imposed on them by administrators, of collecting outcome and impact data to demonstrate accountability than as a meaningful aspect of programming. Because of this mind-set, some Extension educators do not perceive evaluation as integral to Extension programming and pay little or no attention to evaluation at the program planning stage. Some think about evaluation only at the time of program delivery, or not until after a program has been delivered.

The vision behind utilization-focused evaluation is that evaluation should contribute to program effectiveness and improved decision making (Patton, 1997). Use of evaluation data for accountability is only one application of evaluation. Evaluation can also be used for program improvement, program monitoring, program marketing, and organizational advocacy. However, these other uses of evaluation have been neglected by Extension educators due to the heightened demand for accountability. If the use of evaluation is expanded to areas beyond accountability, Extension evaluation becomes a cost-effective and meaningful endeavor for Extension educators. Integrating evaluation into Extension programming from the outset is the practical approach for achieving this change.

Many Extension educators need help understanding why and how to integrate evaluation into programming and, thereby, derive all the benefits evaluation has to offer. The purpose of this article is to discuss a practical approach to integrating evaluation into the Extension programming process to maximize the benefits of Extension evaluation. To this end, this article explores answers to the following questions:

  1. What are the major benefits of Extension evaluation?
  2. How can evaluation be integrated into the Extension programming process?

What Are the Major Benefits of Extension Evaluation?

Benefits that can be achieved by evaluating Extension programs are accountability, program improvement, program monitoring, program marketing, and promotion of Extension. Any effort to integrate evaluation into Extension programming relies on an understanding of these benefits.

Accountability

The data and information related to outputs (such as the number of programs delivered), the number of clients served, the outcomes generated, and the impacts created can be used to justify the costs of Extension programming. This is the most widely known and most commonly derived benefit of Extension evaluation.

Program Improvement

Extension evaluation can be used to identify program strengths, program weaknesses, challenges for programs, and ideas for program improvement. This information can then be used to make strategic decisions related to eliminating the weaknesses in and capitalizing on the strengths of a program.

Program Monitoring

Using information to align program implementation with the original program plan is known as program monitoring. When implementing planned programs, process evaluation data can be used to determine whether the implementation is occurring as planned. If the program implementation is ahead of or behind the planned schedule or otherwise unaligned with expectations, process evaluation information can be used to understand the reasons for the situation and to make informed decisions about how to realign the implementation process.
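
To illustrate, the comparison at the core of program monitoring can be expressed in a few lines of code. The following Python sketch uses entirely hypothetical output targets and counts; it flags whether implementation is ahead of, behind, or on pace with the plan at a given point in the program timeline.

    # A minimal sketch of output monitoring. Indicator names, targets,
    # and counts below are hypothetical illustrations only.
    planned = {"workshops_delivered": 12, "clients_reached": 300}  # full-period plan
    actual = {"workshops_delivered": 5, "clients_reached": 160}    # counts to date
    elapsed = 0.5  # fraction of the program timeline that has passed

    for indicator, target in planned.items():
        expected = target * elapsed             # what the plan predicts by now
        progress = actual[indicator] / expected
        if progress > 1.1:
            status = "ahead of plan"
        elif progress < 0.9:
            status = "behind plan"
        else:
            status = "on track"
        print(f"{indicator}: {actual[indicator]} of {expected:.0f} expected ({status})")

An indicator flagged as ahead of or behind plan would then prompt the educator to ask why, using process evaluation information, before deciding how to realign implementation.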

Program Marketing

Marketing an Extension program includes communicating with potential audiences and convincing them to participate in the program. For ongoing programs, evaluation data collected from previously presented rounds of the program can be used to communicate the benefits derived by participants, thereby convincing potential audiences to take part in the program in the future.

Promotion of Extension (Organizational Advocacy)

Evaluation data, such as outcomes and impacts, demonstrate the benefits derived by program participants. These data can be used to promote the organization for the purpose of gaining public support for Extension. Organizational advocacy such as this is vital for securing public funds for all Extension programs.

How Can Evaluation Be Integrated into the Extension Programming Process?

Evaluation can fit naturally into the Extension programming process. The Extension programming process begins with the identification and prioritization of the needs of the target audience. Then those prioritized needs are used to develop program objectives. Next, the program content, instructional strategies, and delivery techniques necessary for achieving the program objectives are developed.

At this point, before program implementation, the evaluation plan should be developed to facilitate the rest of the educational programming process. When the evaluation plan is developed at this stage, the educator can determine whether the instructional strategies and delivery methods will be effective in achieving the program objectives. Additionally, a proper evaluation plan established during program planning will aid the educator in later determining what contributed to success or failure in achieving objectives. The following steps are helpful in developing a useful evaluation plan:

  1. When the program plan is devised, decide on the type and number of educational activities (outputs) and the size of the target audience to be reached within a particular time frame. During program implementation, the process will be assessed against these output and audience targets to determine whether the program is being implemented as planned. If the program is ahead of or behind the target, reasons for that situation will be identified and used for program improvement.
  2. After program objectives are developed, predict the possible outcomes and impacts of achieving those objectives. The logic model can be used to determine the potential outcomes and impacts (McLaughlin & Jordan, 2010). The target audience should be able to meet their needs by achieving these outcomes and impacts. If this condition is fulfilled, outcomes and impacts can be used to market the program to potential audiences.
  3. Identify indicators of the potential outcomes and impacts of the program for determining whether the program objectives are achieved. These indicators should be practical and useful for carrying out the program and meaningful to Extension stakeholders.
  4. Decide what types of data need to be collected for outcome indicators. These indicators include changes in knowledge, attitudes, skills, aspirations, and behavior/practice and improvements in social, economic, and environmental conditions. Some of the outcome indicators, such as changes in knowledge or skills, can be measured soon after implementing the program. Other outcome indicators, such as behavior and practice changes, can be measured only some time after completing the program.
  5. Decide what type of information is needed for program improvement. Such information includes but is not limited to identification of program strengths and weaknesses and suggestions for improvements.
  6. Develop needed evaluation instruments for collecting data for outcome indicators and the information necessary for program improvement.
  7. Determine what type of evaluation design to employ for collecting outcome data. Commonly used designs in Extension include pretests and posttests and follow-up evaluations.
  8. Decide on a tool for data collection. The most commonly used tools include printed surveys, interviews, and online surveys.
  9. Determine how the data will be analyzed; one common approach is sketched after this list.
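
As an illustration of steps 6 through 9, the following Python sketch analyzes data from a pretest/posttest design with a paired t-test. It is a minimal example, not a prescribed method: the knowledge scores are hypothetical, and a real analysis would use data gathered with the instrument developed in step 6.

    # A minimal sketch of a pretest/posttest outcome analysis using a
    # paired t-test. The scores below are hypothetical.
    from scipy import stats

    pretest = [4, 5, 3, 6, 4, 5, 2, 5, 4, 3]   # scores before the program
    posttest = [7, 8, 5, 8, 6, 7, 5, 8, 6, 6]  # same participants afterward

    t_stat, p_value = stats.ttest_rel(posttest, pretest)
    mean_gain = sum(b - a for a, b in zip(pretest, posttest)) / len(pretest)

    print(f"Mean knowledge gain: {mean_gain:.1f} points")
    print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
    # A small p-value indicates the observed gain is unlikely to be due
    # to chance alone, supporting its use as an outcome indicator.

The same logic applies to a follow-up evaluation of behavior or practice change, with the second measurement collected some time after the program ends rather than immediately.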

Taking these steps at the program planning stage allows for evaluation to be conducted meaningfully and purposefully. To use evaluation as a practical and comprehensively beneficial tool in Extension, it is important to understand all potential applications of evaluation and to incorporate the evaluation plan into the overall programming process.

References

Baughman, S., Boyd, H. H., & Kelsey, K. D. (2012). The impact of the Government Performance and Results Act (GPRA) on two state Cooperative Extension systems. Journal of Extension [Online], 50(1), Article 1FEA3. Available at: http://www.joe.org/joe/2012february/a3.php

Lamm, A. J., Israel, G., & Diehl, D. (2013). A national perspective on the current evaluation activities in Extension. Journal of Extension [Online], 51(1), Article 1FEA1. Available at: http://www.joe.org/joe/2013february/a1.php

McLaughlin, J. A., & Jordan, G. B. (2010). Using logic models. In J. S. Wholey, H. P. Hatry, & K. E. Newcomer (Eds.), Handbook of practical program evaluation (3rd ed., pp. 55–80). San Francisco, CA: Jossey-Bass.

Patton, M. Q. (1997). Utilization-focused evaluation: The new century text (3rd ed.). Thousand Oaks, CA: Sage.

Peters, S., & Franz, N. K. (2012). Stories and storytelling in Extension work. Journal of Extension [Online], 50(4), Article 4FEA1. Available at: http://www.joe.org/joe/2012august/a1.php