August 2015 // Volume 53 // Number 4 // Feature // v53-4a1
Food and Nutrition Extension Programs: Next Generation Impact Evaluation
Abstract
Grassroots stakeholder input results in relevant and timely Extension programs, but presents a challenge for performance measurement using common indicators. A balanced approach to program evaluation and reporting that is adequately valid and reliable while honoring the Extension culture of service is most likely to be successful. This article reviews recent advances in evaluation methodology of food and nutrition programs. It further describes how this evidence base informs the current set of national Extension program outcomes and indicators. Evaluation work is an essential step in documenting the public value of Extension programs.
Introduction
We evaluate Extension programs to identify areas of improvement, document accountability, guide decision-making, and articulate public value (Franz, Arnold, & Baughman, 2014). Historically, Extension programs established credibility by delivering research-based information to large numbers of people, thus fulfilling the land-grant mission to extend the knowledge of the university to the public. At that time, records of the number of program participants and their demographic characteristics were tallied by states and submitted to USDA as sufficient documentation that Extension programs were reaching their intended audiences and having a corresponding effect (Franz & Townson, 2008). Two decades ago a shift took place whereby Extension programs were challenged to provide data related to broad goals, performance appraisal, and public accountability as discussed by Lamm and colleagues (2013). More recently, Extension programs have been expected to submit approved plans of work with annual reports showing medium- to long-term behavioral and environmental impacts in order to receive federal Extension funding. Evaluation work is an essential step in documenting the public value of Extension programs (Franz, Arnold, & Baughman, 2014).
Because Extension systems are complex, program design, delivery, evaluation, and reporting are challenging on many levels (Franz & Townson, 2008). A grassroots approach to meeting local needs coupled with a research-driven agenda has resulted in a wide variety of approaches to Extension programming. This diversity is widely recognized as a strength of Extension programs, but presents a problem for outcome measurement using common indicators as a measure of long-term program impact (Franz, Arnold, & Baughman, 2014).
Flexibility in program delivery, differences in state evaluation capacity, and barriers to communication among federal, state, and local personnel all contribute to the complexity of demonstrating public value. It is difficult to isolate the impact of Extension programs from other changes over time (Franz & Townson, 2008). Further, a resistance to evaluation and lack of data aggregation systems hinder progress toward measuring the efficacy and impact of Extension work (Franz & McCann, 2007).
Extension professionals have a passion for helping people live better lives in strong, vibrant communities. A balanced approach to program evaluation and reporting that is adequately valid and reliable while honoring the Extension culture of service is most likely to be successful (Franz & Townson, 2008). Individualized evaluation may serve the needs of a county or state, but a persistent limitation has been the inability to merge and summarize data nationally (Franz, Arnold, & Baughman, 2014). An example of a nationally aggregated data system is WebNEERS, with its corresponding evaluation tools for the Expanded Food and Nutrition Education Program (EFNEP) and the land-grant component of the Supplemental Nutrition Assistance Program Education (SNAP-Ed). These federally funded programs provide food and nutrition education to limited-resource families. To accomplish the goal of having aggregate data, EFNEP uses evaluation tools with a mandated set of questions and response options employed by 50 states and seven territories. Although the ultimate evaluation goal for food and nutrition Extension programs is the capacity to aggregate data across states, we are suggesting this be accomplished by adoption of common impact indicators, not mandated evaluation tools. The difference is subtle in theory, but major in practice.
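To make the distinction concrete, consider a minimal, hypothetical sketch in Python (the states, counts, and indicator name below are invented for illustration, not drawn from EFNEP or SNAP-Ed data). Each state may administer its own locally appropriate tool, but so long as results are reported against a shared indicator, national aggregation is straightforward:

```python
from dataclasses import dataclass

@dataclass
class IndicatorReport:
    """One state's report against a shared national indicator."""
    state: str
    indicator: str      # common indicator name, identical across states
    participants: int   # matched pre/post program participants
    improved: int       # participants showing improvement on the indicator

# Each state scores its own evaluation tool locally, then reports
# only the common indicator, so the instruments need not be identical.
reports = [
    IndicatorReport("State A", "increased vegetable consumption", 420, 264),
    IndicatorReport("State B", "increased vegetable consumption", 310, 186),
]

total = sum(r.participants for r in reports)
improved = sum(r.improved for r in reports)
print(f"National roll-up: {improved}/{total} participants "
      f"({100 * improved / total:.0f}%) reported increased vegetable consumption")
```

Under a mandated-tool model, both states would also have to administer identical instruments; common indicators keep the instrument local while making the reporting schema national.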
This article reviews recent advances in evaluation methodology of food and nutrition programs. It further describes how this evidence base informs the current set of national Extension program outcomes and indicators. Definitions and examples of key terms are provided in Table 1 to illustrate how they are used in this article. Our intent is to continue to document progress in Extension program evaluation and to encourage adoption and use of these common indicators across states. In particular, we call on food, nutrition, and health faculty, specialists, and local educators to work with state evaluation specialists and administrators to adopt nationally available indicators in their next plan of work. This is the next step toward demonstrating the public value of these Extension programs. By demonstrating the effectiveness of our work not just at the state level (as currently done), but with the capacity for national aggregation, we can build a strong case for facilitating better lives for the people we serve across 50 states.
Table 1.
Definitions and Examples of Key Terms

Term | Definition
Program | A coordinated set of activities designed to accomplish a set of purposes by serving target audiences and improving their lives in identified areas of interest. A major endeavor authorized and funded to achieve a significant purpose, defined in terms of the principal actions or activities required. A program may cross organizational lines.1
NIFA Program Area | A National Institute of Food and Agriculture selected area of focus for a fiscal year or series of years, for example, the area of childhood obesity.
Input | Resources that go into a program in order to implement the activities successfully.2
Output | The direct products of program activities, which may include types, levels, and targets of services to be delivered by the program. Immediate measures of what the program did.2
Outcome | Results, or changes, for individuals, groups, communities, organizations, or systems connected to the program. The results of program operations or activities; the effects elicited by the program.2
Impact | The longer-term changes or benefits for individuals, groups, communities, organizations, or systems that result from the program's activities.1
Impact/Evaluation Indicator | A factor, variable, or observation that is empirically connected with the criterion variable; a correlate. A specific, observable, and measurable characteristic or change that shows the progress a program is making toward achieving a specified outcome.2,3
Measurement Tool (Measure) | An instrument or device for assessing a construct, indicator, or behavior.
Outcome Statement | A conclusion statement based on an interpretation of the results of a program evaluation.

1Center for Program Evaluation and Performance Measurement, Bureau of Justice Assistance, https://www.bja.gov/evaluation/glossary/
2Introduction to Program Evaluation for Public Health Programs: A Self-Study Guide, Centers for Disease Control and Prevention, http://www.cdc.gov/eval/guide/glossary/index.htm
3Scriven, M. (1991). Evaluation thesaurus (4th ed.). Newbury Park, CA: Sage Publications.
Evolution of Food and Nutrition Program Evaluation
Food and nutrition educators, and other professionals, have advanced the methodology and techniques used to evaluate programs designed to promote healthy eating (Contento, Randell, & Basch, 2002; USDA, 1997). These methods for evaluation and reporting are informed by several decades of progress in the evaluation of publicly funded programs and nutrition education evaluation. In 1995, Extension nutrition educators collaborated on the Impact Indicators Project (Chapman et al., 1995). The purpose of the project was to develop evaluation impact indicators for core nutrition education programs common among states so that national data could be aggregated to demonstrate impact (Townsend, Johns, Shilts, & Farfan-Ramirez, 2006). During the course of this project, perspectives on program evaluation were collected from field staff (Clark et al., 1995). Field staff expressed concern about the literacy level and length of the evaluation tools, as well as the need for a standard methodology to collect participant feedback. Having the support of a direct supervisor, state specialist, and state administrators increased the likelihood that field staff regularly conducted evaluation of nutrition education programs. Analysis of data from this project resulted in the following recommendations (Chapman-Novakofski et al., 1997):
- Strong administrative support and leadership are critical at local, state, and federal levels for effective program evaluation to occur.
- Extension personnel specializing in program evaluation should provide in-depth, sustained in-service education on program evaluation techniques.
- Evaluation tools must be user friendly and audience sensitive. Brief instruments that are flexible enough to be modified for specific programs and diverse audiences are preferred.
Shortly after the Impact Indicator Project was conducted, the Centers for Disease Control and Prevention (CDC) issued their landmark framework for program evaluation (CDC, 1999). The framework (Figure 1) is a practical tool that summarizes and organizes the steps and standards for effective program evaluation.
Figure 1.
Recommended Framework for Program Evaluation1
1 Source: Milstein, B. & Wetterhall, S. (2000). A framework featuring steps and standards for program evaluation. Health Promotion Practice, 1(3), 221-228.
The CDC program evaluation framework was widely adopted for use by public health organizations (Milstein & Wetterhall, 2000). The CDC framework and use of logic models continue to provide a framework for program design and to guide comprehensive evaluation plans with data collection tools (Freedman et al., 2014). Extension programs now devote significant resources to building evaluation capacity using logic models and the now familiar process of stakeholder input, program description, evaluation design, and communication of findings (Rennekamp & Arnold, 2009). A three-component approach to building Extension organizational capacity for evaluation was described by Taylor-Powell and Boyd (2008), as depicted in Table 2.
Table 2.
Three-Component Approach to Building Extension Evaluation Capacity (Taylor-Powell & Boyd, 2008)

Component | Elements
Professional development | Training; technical assistance; collaborative evaluation projects; mentoring and coaching; communities of practice
Resources and supports | Evaluation and capacity building expertise; evaluation materials; evaluation champions; organizational assets; financing; technology; time
Organizational environment | Leadership; demand; incentives; structures; policies and procedures
This framework for building Extension evaluation capacity is consistent with the five conditions of collective impact identified by Kania and Kramer (2013) for changes leading to social progress: (1) common agenda, (2) shared measurement process, (3) mutually reinforcing activities, (4) continuous communication, and (5) backbone support. Although designed to aid in cross-sector work involving different organizations working toward collective action, these conditions may be applied to any complex organization, such as Extension.
USDA Food and Nutrition Service (FNS), with nutrition education programs spread across several branches of the agency, also contributes significantly to the advancement of evaluation procedures. The USDA Office of Analysis, Nutrition, and Evaluation collaborated with FNS to issue a 2005 document describing the principles of sound impact evaluation (USDA, 2005). The authors concluded that USDA lacks reliable data on what specific types of nutrition education are provided, the outcomes of those services, and how they influence nutrition knowledge and dietary behaviors. Further, USDA program evaluators identified a need for evaluating system, environmental, and policy changes resulting from nutrition education programs, social marketing projects, and health communication campaigns (Gregson et al., 2001; Levine, Abbatangelo-Gray, Mobley, McLaughlin, & Herzog, 2012).
Evaluations of two federally funded nutrition education programs, EFNEP and the land-grant component of SNAP-Ed, have been conducted by Extension personnel (Wardlaw & Baker, 2012; Gold, Barno, Sherman, Lovett, & Hurtado, 2013). Evaluation capacity for SNAP-Ed programs has been advanced by a process of developing and validating measures for a variety of programs serving limited-resource audiences (Townsend, 2006). The measures must meet acceptable standards for validity, reliability, sensitivity, and internal consistency. In addition, the measures should be easy to administer and complete, and therefore brief and understandable for a SNAP-Ed audience. This approach was designed to assist nutrition education researchers and field staff in large, community-based nutrition education programs.
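The standards named above are statistical as well as practical. The article does not specify which statistics are used, but internal consistency, for example, is conventionally estimated with Cronbach's alpha. The following is a minimal sketch, assuming a multi-item scale scored numerically (the function name and example data are illustrative, not drawn from the measures cited):

```python
import numpy as np

def cronbachs_alpha(item_scores) -> float:
    """Estimate internal consistency of a multi-item scale.

    item_scores: 2-D array-like, one row per respondent, one column per item.
    """
    scores = np.asarray(item_scores, dtype=float)
    k = scores.shape[1]                              # number of items
    item_variances = scores.var(axis=0, ddof=1)      # per-item sample variance
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Example: five respondents answering a four-item frequency scale (1-5).
responses = [
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [3, 3, 3, 4],
    [5, 4, 5, 5],
    [1, 2, 1, 2],
]
print(f"Cronbach's alpha = {cronbachs_alpha(responses):.2f}")
```

Values near or above 0.7 are often treated as acceptable for group-level program evaluation, although thresholds vary with the purpose of the measure.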
A comprehensive review of studies examining the effectiveness of interventions from 1980 to 1995 found that nutrition education was a significant factor in improving eating practices when behavioral change was set as a goal and education strategies were directed toward that goal (Contento, Randell, & Basch, 2002). Nutrition educators and Extension program developers now consider behavioral outcomes or systems changes to be the focus of their work. This perspective facilitated development of common indicators related to key behaviors associated with improved dietary quality and health. In addition to behaviorally based goals, nutrition education programs have benefited from a strong theory-driven approach to program design and professional development (Townsend et al., 2003). By modeling a theory-based approach in Extension in-service trainings, program developers and evaluators can help colleagues become more comfortable with their capacity to influence behavior change and to meaningfully measure and report that change.
Development of USDA NIFA National Outcomes and Indicators for Food and Nutrition Extension Programs
In 2008, NIFA national program leaders created the NIFA Nutrition and Health Committee for Planning and Guidance. Membership structure and operating procedures were established, and subcommittees were formed, including one on evaluation indicators. Survey data from 122 respondents representing 42 states indicates that the majority of nutrition educators collect data on evaluation indicators related to changes in knowledge and targeted behaviors. The program areas most frequently evaluated were safe food handling, food resource management, healthy eating, and physical activity (Pena-Purcell et al., 2012). The evaluation subcommittee drafted evaluation indicators to capture changes in knowledge, behavior, intention, and policy, systems, or environment.
In 2010, a NIFA panel of experts was convened to improve the Extension plan of work process. By 2012, a set of national Extension outcomes and indicators had been identified for five planned program areas:
- Childhood obesity
- Climate change
- Food safety
- Global food security and hunger
- Sustainable energy
Program leaders working with the childhood obesity planned program area were able to develop indicators consistent with those developed by the evaluation subcommittee (Figure 2).
Figure 2.
Childhood Obesity Program Outcomes and Indicator Examples from Tools1
1Source: USDA NIFA National Outcomes and Indicators http://nifa.usda.gov/resource/how-report-areera-national-outcomes-and-indicators
Data collection tools for reporting could include items like:
- When you shop for groceries, how often do you buy vegetables for your family?
- How often do you keep vegetables washed, trimmed, sliced, and refrigerated, ready for your children to eat?
- How often do you play with your child outdoors?
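Items like these are typically administered before and after programming and scored on a frequency scale, with the share of participants whose responses improve reported toward the indicator. A minimal scoring sketch follows, assuming a hypothetical five-point response set (the scale labels and data are invented; the item wording follows the list above):

```python
# Hypothetical five-point frequency scale for items such as
# "When you shop for groceries, how often do you buy vegetables for your family?"
SCALE = {"never": 1, "rarely": 2, "sometimes": 3, "often": 4, "always": 5}

def percent_improved(pre: list[str], post: list[str]) -> float:
    """Share of matched participants whose post-program score exceeds pre."""
    pairs = list(zip(pre, post, strict=True))  # require matched pre/post data
    improved = sum(SCALE[after] > SCALE[before] for before, after in pairs)
    return 100 * improved / len(pairs)

pre_responses = ["rarely", "sometimes", "often", "never", "sometimes"]
post_responses = ["sometimes", "often", "often", "rarely", "always"]
print(f"{percent_improved(pre_responses, post_responses):.0f}% of participants "
      "reported buying vegetables more often")
```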
In 2013, several states voluntarily selected indicators supporting these childhood obesity outcomes for inclusion in their state plans of work. In 2014, these states will be able to report toward the indicators selected. As data from state reports becomes available at the federal level in 2015, the collective impact of Extension programs addressing childhood obesity in these states can be aggregated.
Next Steps to Demonstrate Impact and Public Value of Extension Food and Nutrition Programs
The groundwork has been laid, and an infrastructure is in place to allow Extension food and nutrition programs to report against common impact indicators to demonstrate their value and impact. Our challenge is to continue to increase capacity and appreciation for evaluation among local and state personnel. Specialists and agents need clear communication about how to incorporate national outcomes and indicators within the confines of their state systems. Appropriate use of common measures will need to be integrated into the program planning, evaluation, and reporting process. The authors recognize that other behaviors related to childhood obesity, such as physical activity, are of paramount importance in the energy balance equation. However, due to limited space, we address only those related to food and nutrition in this article.
Evaluation of EFNEP and SNAP-Ed programs provides a significant body of evidence and experience that informs the current national outcomes and indicators for childhood obesity. However, unlike these specialized programs supported by specific federal funding streams, general food, nutrition, and health programs do not share commonly used evaluation tools. Many states have significant food, nutrition, and health programs delivered in ways other than EFNEP or SNAP-Ed. Best practices for encouraging use of national outcomes and indicators include identification of examples of state programs that target available indicators. The states' program evaluation and reporting tools for those programs need to be aligned with common indicators. This process will require several years in order to allow the indicators to be fully integrated into program planning, development, implementation, evaluation, and reporting.
By building organizational capacity to report toward common food, nutrition, and health indicators, we will be able to expand our current ability to demonstrate the efficacy and value beyond EFNEP and SNAP-Ed programming. The impact of Extension programs delivered to all segments of the U.S. population may for the first time be fully assessed. If food, nutrition, and health programs begin to report toward current outcomes and indicators, these are examples of outcome statements that could emerge:
- Extension programs helped reverse the trends in prevalence of childhood obesity.
- Participants in Extension programs reported eating more healthy foods, including fruits and vegetables.
- Policy, environmental, and systems changes facilitated by Extension programs have improved access to healthy foods for families.
- Safe food handling practices at home and other food venues have been improved.
Perhaps the next phase in the development of evaluation methodology for Extension programs will assess the efficacy and impact of multi-disciplinary programs addressing complex problems. For example, food environments and food systems provide the context in which individuals, families, and organizations participate in eating behaviors. A comprehensive evaluation and reporting system, with a set of common indicators for how programs change the food supply and consumer eating behavior, would require coordination across traditional areas of Extension programming (e.g., agriculture, youth development, family and consumer sciences, community development). An example of a multi-disciplinary impact evaluation comes from North Carolina, where a community gardening program was assessed (Jayaratne, Bradley, & Driscoll, 2009). Evaluation indicators for this gardening program included outcomes related to community development, horticulture, healthy lifestyle education, and youth development.
Implications for Practice
County Extension agents for Family and Consumer Sciences; Food, Nutrition, and Health specialists; and State Program Leaders and Evaluation specialists can take these four steps to facilitate widespread use of national outcomes and indicators:
- Specialists and faculty members in land-grant universities first determine if their state is delivering significant food, nutrition, and health programming in addition to EFNEP and SNAP-Ed.
- Specialists next align currently offered or newly developed programs with the outcomes and indicators chosen from national-level options for use in their respective states.
- Specialists, program leaders, and administrators subsequently provide guidance and a system that will allow County Extension Agents to develop a locally appropriate plan of work, deliver programs to meet stakeholder needs, and collect and report data toward appropriate evaluation indicators using appropriate tools for the local audience. Development and sharing of tool examples can help Extension personnel better understand how to craft culturally appropriate tools for collection of indicator data to be aggregated. Linking national outcome indicators to state-level programs will build capacity in the Extension system to demonstrate collective impact.
- Clear communication at federal, state, district, and local levels will then be needed regarding scope of reporting toward indicators and support for field staff engaged in program evaluation and reporting.
Resources to facilitate use of the Extension plan of work and the National Extension Evaluation Outcomes & Indicators include:
- University of Wisconsin Extension. Program Development and Evaluation Resources. Retrieved November 21, 2013 from http://www.uwex.edu/ces/pdande/evaluation/evaldocs.html
- University of Wisconsin Extension. Program Development and Evaluation Logic Model Resources. Retrieved November 21, 2013 from http://www.uwex.edu/ces/pdande/evaluation/evallogicmodel.html
- How to Report on USDA NIFA National Outcomes and Indicators http://nifa.usda.gov/resource/how-report-areera-national-outcomes-and-indicators
Summary
This article provides a description of how planning, evaluation, and reporting of Extension programs have developed over the last few decades. More rigorous accountability requirements for organizations receiving federal funding have been established over the past 25 years. Extension programs, with complex funding, staffing, and stakeholder-driven programming, are particularly challenging to evaluate and report on a national scale. Progress in evaluation methodology has allowed large organizations with a common agenda to agree upon shared indicators. Use of logic models with outcomes and indicators is now common among Extension programs. There is a recognized need to consistently measure change over time to document short-, medium-, and long-term outcomes. These advances have fostered development of a system with the potential to measure and communicate a comprehensive Extension story about the public value of food, nutrition, and health programs.
Acknowledgements
The authors acknowledge the assistance of Annie Lindsay in preparation of this article. The authors thank Marc Braverman for his excellent suggestions regarding Table 1.
References
Centers for Disease Control and Prevention. (1999). Framework for program evaluation in public health. Morbidity and Mortality Weekly Report, 48(RR-11), 1-40. Retrieved from: http://www.cdc.gov/mmwr/preview/mmwrhtml/rr4811a1.htm
Chapman, K., Clark, C., Boeckner, L., McClelland, J., Britten, P., & Keim, K. (1995). Multistate impact indicators project. Proceedings from the Society for Nutrition Education Annual Meeting, 20, 45.
Chapman-Novakofski, K. M., Boeckner, L., Canton, R., Clark, C. D., McClelland, J., Keim, K., & Britten, P. (1997). Evaluating evaluation -- What we've learned. Journal of Extension [On-line], 35(1), Article 1RIB2. Available at: http://www.joe.org/joe/1997february/rb2.php
Clark, C. D., Canton, R., Chapman, K., Boeckner, L., McClelland, J., Britten, P., & Keim, K. (1995). Perspectives on program evaluation among field staff. Proceedings from the Society for Nutrition Education Annual Meeting, 20, 45-46.
Contento, I. R., Randell, J. S., & Basch, C. E. (2002). Review and analysis of evaluation measures used in nutrition education intervention research. Journal of Nutrition Education and Behavior, 34, 2-25.
Franz, N., Arnold, M., & Baughman, S. (2014). The role of evaluation in determining the public value of Extension. Journal of Extension [On-line], 52(4), Article 4COM3. Available at: http://www.joe.org/joe/2014august/comm3.php
Franz, N., & McCann, M. (2007). Reporting program impacts: Slaying the dragon of resistance. Journal of Extension [On-line], 45(6), Article 6TOT1. Available at: http://www.joe.org/joe/2007december/tt1.php
Franz, N., & Townson, L. (2008). The nature of complex organizations: The case of Cooperative Extension. In M. T. Braverman, M. Engle, M. E. Arnold, & R. A. Rennekamp (Eds.), Program evaluation in a complex organizational system: Lessons from Cooperative Extension. New Directions for Evaluation, 120, 5 – 14.
Freedman, A. M., Simmons, S., Lloyd, L. M., Redd, T. R., Alperin, M., Salik, S. S., Sweir, L., & Miner, K. R. (2014). Health Promotion Practice, 15(S1), 80S-88S.
Gold, A., Barno, T. A., Sherman, S., Lovett, K., & Hurtado, G. A. (2013). Creating a Minnesota statewide SNAP-Ed program evaluation. Journal of Extension [On-line], 51(2), Article 2RIB3. Available at: http://www.joe.org/joe/2013april/rb3.php
Gregson, J., Foerster, S. B., Orr, R., Jones, L., Benedict, J., Clarke, B., Hersey, J., Lewis, J., & Zotz, K. (2001). System, environmental, and policy changes: using the social-ecological model as a framework for evaluating nutrition education and social marketing programs with low-income audiences. Journal of Nutrition Education, 33, S4-S15.
Jayaratne, K. S. U, Bradley, L. K., & Driscoll, E. A. (2009). Impact evaluation of integrated Extension programs: Lessons learned from the community gardening program. Journal of Extension [On-line], 47(3), Article 3TOT3. Available at: http://www.joe.org/joe/2009june/tt3.php
Kania, J., & Kramer, M. (2013). Embracing emergence: how collective impact addresses complexity. Stanford Social Innovation Review. Retrieved from: http://www.ssireview.org
Lamm, A. J., Israel, G. D., & Diehl, D. (2013). A national perspective on the current evaluation activities in Extension. Journal of Extension [On-line], 51(1), Article 1FEA1. Available at: http://www.joe.org/joe/2013february/a1.php
Lamm, A. J., & Israel, G. D. (2013). A national examination of Extension professionals' use of evaluation: does intended use improve effort? Journal of Human Sciences and Extension, 1(1), 49-62.
Levine, E., Abbatangelo-Gray, J., Mobley, A. R., McLaughlin, G. R., & Herzog, J. (2012). Evaluating MyPlate: an expanded framework using traditional and nontraditional metrics for assessing health communication campaigns. Journal of Nutrition Education and Behavior, 44, S2-S12.
Milstein, B., & Wetterhall, S. (2000). A framework featuring steps and standards for program evaluation. Health Promotion Practice, 1(3), 221-228.
Pena-Purcell, N., Bowen, E., Zoumenou, V., Schuster, E. R., Boggess, M., Manore, M. M., & Gerrior, S. A. (2012). Extension professionals' strengths and needs related to nutrition and health programs. Journal of Extension [On-line], 50(3), Article 3RIB2. Available at: http://www.joe.org/joe/2012june/rb2.php
Rennekamp, R. A., & Arnold, M. E. (2009). What progress, program evaluation? Reflections on a quarter-century of Extension evaluation practice. Journal of Extension [On-line], 47(3), Article 3COM1. Available at: http://www.joe.org/joe/2009june/comm1.php
Taylor-Powell, E., & Boyd, H. H. (2008). Evaluation capacity building in complex organizations. In M. T. Braverman, M. Engle, M. E. Arnold, & R. A. Rennekamp (Eds.), Program evaluation in a complex organizational system: Lessons from Cooperative Extension. New Directions for Evaluation, 120, 55-69.
Townsend, M. S., Nitzke, S., Contento, I., McClelland, J., Keenan, D., & Brown, G. (2003). Using a theory driven approach to design a professional development workshop. Journal of Nutrition Education and Behavior, 35, 312-318.
Townsend, M. (2006). Evaluating food stamp nutrition education: process for development and validation of evaluation measures. Journal of Nutrition Education and Behavior, 38, 18-24.
Townsend, M. S., Johns, M., Shilts, M. K., & Farfan-Ramirez, L. (2006). Evaluation of a USDA nutrition education program for low-income youth. Journal of Nutrition Education and Behavior, 38, 30-41.
USDA Food & Nutrition Service, Office of Analysis, Nutrition, and Evaluation. (2005). Nutrition education: Principles of sound impact evaluation. Retrieved from: http://www.fns.usda.gov/sites/default/files/EvaluationPrinciples.pdf
USDA Food and Consumer Service, Office of Analysis and Evaluation (1997). Charting the course for evaluation: How do we measure the success of nutrition education and promotion of food assistance programs? Summary of Proceedings.
Wardlaw, M. K., & Baker, S. (2012). Long-term evaluation of EFNEP and SNAP-Ed. The Forum for Family and Consumer Issues, 17(2). ISSN 15405273. Retrieved from: http://ncsu.edu/ffci/publications/2012/v17-n2-2012-summer-fall/wardlaw-baker.php