The Journal of Extension - www.joe.org

February 2017 // Volume 55 // Number 1 // Commentary // v55-1comm1


Evaluating Extension Impact on a Nationwide Level: Focus on Programs or Concepts?

Abstract
As agencies with minimal national reach and capacity grow more sophisticated in capturing public and private funding for outreach, Extension finds itself competing for national recognition of its scope and capacity. Because of the need for that recognition, it is increasingly important that states look beyond their individual systems of evaluation to cooperate in demonstrating the full extent of the Extension network for national stakeholders and funders. To do that, Extension must implement a nationwide system of evaluation, and that system should be built around the teaching of common concepts, rather than the delivery of common programs.


Dena K. Wise
Professor and Consumer Economics Specialist
The University of Tennessee Extension
Knoxville, Tennessee
dkwise@utk.edu

Evaluation Is More Important Than Ever

As agencies with minimal national reach and capacity grow more sophisticated in capturing public and private funding for national outreach, Extension finds itself competing for national recognition of its capacity and scope. Because of the need for that recognition, it is increasingly important that states look beyond their individual systems of evaluation to cooperate in demonstrating national outreach so that the full extent of the Extension network is realized and communicated to both national and global stakeholders and funders.

In spite of Extension's federal system of reporting to the U.S. Department of Agriculture National Institute of Food and Agriculture, there seems to be little clear articulation of Extension's national capacity to deliver educational programs that change the behavior of its clientele. Some of the reason for this situation may lie in Extension's funding history, under which Extension was not required to compete for funding on a program-by-program basis but was funded wholesale through Smith-Lever allocation. Another reason may lie in individual states' autonomy in implementing their programs in very different ways. Evaluation systems and methods, and even expectations for evaluation, differ widely from state to state. But a more critical barrier to showing national program impact is the difficulty, given Extension's program development structure, of focusing evaluation on Extension programs themselves. This article addresses the idea of an alternative national impact evaluation system built around the teaching of common concepts and the impact of that teaching on the behavior of the learner or the outcome of individual or community effort.

Evaluation in Extension

Across the Extension system, there is clear concern about the importance of accountability to the organization. Many articles on evaluation topics published in the Journal of Extension describe evaluation of individual activities or processes and are program specific; however, some articles propose models and methods of evaluation that might be used on a broader scale (e.g., Jayaratne, 2015, 2016; Kelsey & Stafne, 2012; Nielsen, 2011). Others describe criteria or aspects of evaluation that are relevant across the system for meeting standards for methodological integrity and rigor (Arnold & Cater, 2016; Braverman & Engle, 2009; Radhakrishna & Relado, 2009; Rennekamp & Arnold, 2009).

Regardless of the methodology employed, evaluation of individual programs is a matter of training educators at all levels on the evaluation process and having them implement that process. State and national administration and evaluation professionals have given considerable attention to this task, with the result that more and more Extension educators at all levels are incorporating evaluation into the program planning process and reporting results, whether they use prepackaged programs or develop programs themselves (Workman & Scheer, 2012).

Including evaluation in the program planning process for individual programs, however, is entirely different from evaluating across Extension programs on a statewide or nationwide basis. Evaluating across programs, even programs in the same subject matter, will not be accomplished by training all Extension educators in the evaluation process. Doing so results only in many different evaluations of individual programs. Demonstration of Extension's national reach and scope can be accomplished only through development of a national evaluation system. Though Lamm, Israel, and Diehl (2013) attempted to inventory the types of evaluation data being collected across the system, and Payne and McDonald (2015) reported on the use of common instruments to evaluate Children, Youth, and Families at Risk programs with varying delivery methods, relatively little has been written to date about how to operationalize multistate or national evaluation.

The task of building such a system is predicated on important decisions that must be made regarding common units of evaluation. Conversation about this issue is taking place among specialists in mutual subject areas, and a few groups have moved toward appointing committees to study and make recommendations on the matter.

Collecting Impact Data on "Programs" May Not Build on Extension's Strengths

Some discussion among those who see the need for a national system for evaluation has centered on selecting "signature programs," or curricula widely used across states, and measuring outcomes and impacts from such programs. In fact, it is a relatively common belief among Extension specialists that local Extension educators should not vary from prescribed and "proven" curricula, taught in lesson series. Proponents of this thinking often stress the need for sequential learning, emphasizing that learning must be built, concept upon concept, to a level that tips the scale toward changes in behavior. National funding agencies reinforce this thinking by sometimes limiting their funding to "evidence-based programming," with strictly prescribed program protocols and varying standards for proof of program effectiveness. Program developers with proprietary interest in curricula or methodologies also want assurance that their programs will be applied as intended and their intellectual property will be protected.

Although funders' wishes to support programs that make a difference—and developers' proprietary interests—are understandable, there are difficulties associated with limiting Extension evidence of impact to evidence-based programming, or any programming with a strictly prescribed methodology.

  • Many packaged curricula are, in reality, compilations of concepts that are merely related to the same topic rather than built on sequential learning or problem-solving processes. Often they deliver information without engaging the learner interactively, or they involve only superficial interaction.
  • Both issues and audiences are fluid and dynamic, and a solid evidence base for program impact takes years to establish. By the time the program is approved as evidence based, it may no longer address the relevant issue and audience in the most expedient way. One of the strengths of the Extension network is its ability to respond to emerging needs rapidly.
  • It is difficult to incorporate accommodations related to ethnic or cultural factors in programs that follow precisely prescribed methods and to account for any resulting biases in evaluation (Dogan, Sitnick, & Onati, 2012).
  • Strictly prescribed methodologies make it difficult to adjust teaching methods to accommodate different learning styles of individuals in the intended audience. Indeed, the emphasis on strictly prescribed program methods may undercut the local educator's experience with and insight into the local audience and his or her capability to finesse programming to fit local culture and need—another major strength of the Extension network.

What Can Be Done to Find Common Evaluation Measures?

The key to evaluating across Extension programs nationwide is recognizing the unit of evaluation not as a program but rather as an individual teaching concept, learning task, or behavior change. A system centering on this idea would provide the capability for nationwide reporting to a common set of indicators spanning a variety of disciplines, programs, topics, delivery methods, and outcomes. For example, whether a farm management specialist worked with a group of farmers to budget expenditures for a crop rotation program or a family economics specialist retrieved data on how many college students created spending plans using an online learning program, each could report, on postprogram evaluations, the number of participants who "learned to make a plan for spending." Then, 3 to 6 months later, using follow-up with the same clientele, each could evaluate more long-term impacts by reporting the number of participants who "followed a spending plan," whether the spending plan was related to crop production or personal finance.
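
To make the indicator-level approach concrete, here is a minimal sketch, in Python, of how two very different programs might report against the same pair of indicators and how those reports could be rolled up nationally. The indicator codes, program names, and counts are hypothetical illustrations, not elements of any existing Extension reporting system.

    from collections import defaultdict

    # Common indicators span disciplines: the same code applies whether
    # the "spending plan" concerns crop budgets or personal finance.
    # (Codes and labels are hypothetical.)
    INDICATORS = {
        "SPEND-01": "learned to make a plan for spending",  # postprogram
        "SPEND-02": "followed a spending plan",  # 3- to 6-month follow-up
    }

    reports = [
        # (state, program, indicator code, participant count)
        ("TN", "Crop Rotation Budgeting Workshop", "SPEND-01", 42),
        ("TN", "Crop Rotation Budgeting Workshop", "SPEND-02", 28),
        ("OH", "Online Student Money Management", "SPEND-01", 310),
        ("OH", "Online Student Money Management", "SPEND-02", 175),
    ]

    # Aggregate nationwide by indicator, regardless of program, topic,
    # or delivery method.
    totals = defaultdict(int)
    for state, program, code, count in reports:
        totals[code] += count

    for code, total in totals.items():
        print(f"{INDICATORS[code]}: {total} participants")

Because the indicator, not the program, is the unit of evaluation, the farm management workshop and the online personal finance course contribute to the same national totals without sharing a curriculum or delivery method.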

This system also would allow Extension, nationwide, to anchor its programs to common sets of core teaching concepts, learning tasks, and educator competencies across topic areas, all measured by common indicators. It would augment, rather than replace, current state systems for evaluation and reporting, yet it could serve as a model for more standardized reporting across Extension nationwide and for consistent measurement of a broad range of Extension program impacts. It would accomplish all this while leaving intact some of Extension's most distinctive and valuable programming assets: the flexibility to respond to emerging needs with just-in-time programming, the capability to engage clientele in defining issues and seeking solutions, and the capability to customize programming to local needs and cultures of learning.

Implementing such an evaluation system would entail (a) reaching agreement, among educators in different fields and disciplines, on a limited set of national/federal reporting indicators based on learning of individual concepts or skills and adopting of individual behaviors; (b) developing an online reporting system; (c) designating responsibility for populating the system with data—either to individual agents for populating the system with individual program data or to state specialists for populating the system with compiled state data; and (d) designating responsibility for data analysis and stakeholder reporting.
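
A rough sketch of how steps (c) and (d) might fit together follows, assuming the hypothetical record structure below. Whether agents report program-level counts or specialists report compiled state totals, both are normalized to the same indicator-level record, so analysis and stakeholder reporting can query one uniform data set. All names and fields are illustrative assumptions, not a specification.

    from dataclasses import dataclass

    # Hypothetical record structure for the online reporting system.
    @dataclass
    class IndicatorRecord:
        state: str
        indicator_code: str  # from the agreed national indicator set
        participants: int
        source: str  # "agent" (program data) or "specialist" (state data)

    def from_agent(state, program_counts):
        """An agent's postprogram counts, keyed by indicator code."""
        return [IndicatorRecord(state, code, n, "agent")
                for code, n in program_counts.items()]

    def from_specialist(state, compiled):
        """A specialist's already-compiled statewide totals."""
        return [IndicatorRecord(state, code, n, "specialist")
                for code, n in compiled.items()]

    records = (from_agent("TN", {"SPEND-01": 42, "SPEND-02": 28})
               + from_specialist("OH", {"SPEND-01": 310, "SPEND-02": 175}))

    # Step (d): analysis and stakeholder reporting reduce to queries
    # over one uniform record type, whatever the data's origin.
    national = sum(r.participants for r in records
                   if r.indicator_code == "SPEND-02")
    print(f"Followed a spending plan (all states): {national}")

The choice in step (c) of who populates the system appears here only as a source tag on each record; the analysis in step (d) is unaffected by it.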

Essentially, Extension's programming strength lies in its capability to engage both local clientele and educators with public universities across the United States to apply educational knowledge when and where it is needed. It is essential that any nationwide evaluation system accommodate and capitalize on this strength. Such a system would establish Extension's capacity to deliver educational information on a nationwide basis and would establish a sustainable system of longitudinal measurement of national impact.

References

Arnold, M., & Cater, M. (2016). Program theory and quality matter: Changing the course of Extension program evaluation. Journal of Extension, 54(1) Article 1FEA1. Available at: https://www.joe.org/joe/2016february/a1.php

Braverman, M., & Engle, M. (2009). Theory and rigor in Extension program evaluation planning. Journal of Extension, 47(3) Article 3FEA1. Available at: https://www.joe.org/joe/2009june/a1.php

Dogan, S., Sitnick, S., & Onati, L. (2012). The forgotten half of program evaluation: A focus on the translation of rating scales for use with Hispanic populations. Journal of Extension, 50(1) Article 1FEA5. Available at: https://www.joe.org/joe/2012february/a5.php

Jayaratne, K. S. U. (2015). Cost effectiveness ratio: Evaluation tool for comparing the effectiveness of similar Extension programs. Journal of Extension, 53(6) Article 6TOT3. Available at: https://www.joe.org/joe/2015december/tt3.php

Jayaratne, K. S. U. (2016). Tools for formative evaluation: Gathering the information necessary for program improvement. Journal of Extension, 54(1) Article 1TOT2. Available at: https://www.joe.org/joe/2016february/tt2.php

Kelsey, K., & Stafne, E. (2012). A model for evaluating eXtension communities of practice. Journal of Extension, 50(5) Article 5FEA1. Available at: https://www.joe.org/joe/2012october/a1.php

Lamm, A., Israel, G., & Diehl, D. (2013). A national perspective on the current evaluation activities in Extension. Journal of Extension, 51(1) Article 1FEA1. Available at: https://www.joe.org/joe/2013february/a1.php

Nielsen, R. (2011). A retrospective pretest-posttest evaluation of a one-time personal finance training. Journal of Extension, 49(1) Article 1FEA4. Available at: https://www.joe.org/joe/2011february/a4.php

Payne, P., & McDonald, D. (2015). Using common evaluation tools across multi-state programs: A study of parenting education and youth engagement programs in children, youth, and families at-risk. Journal of Extension, 53(3) Article 3FEA5. Available at: https://www.joe.org/joe/2015june/a5.php

Radhakrishna, R., & Relado, R. (2009). A framework to link evaluation questions to program outcomes. Journal of Extension, 47(3) Article 3TOT2. Available at: https://www.joe.org/joe/2009june/tt2.php

Rennekamp, R., & Arnold, M. (2009). What progress, program evaluation? Reflections on a quarter-century of Extension evaluation practice. Journal of Extension, 47(3) Article 3COM1. Available at: https://www.joe.org/joe/2009june/comm1.php

Workman, J., & Scheer, S. (2012). Evidence of impact: Examination of evaluation studies published in the Journal of Extension. Journal of Extension, 50(2) Article 2FEA1. Available at: https://www.joe.org/joe/2012april/a1.php