The Journal of Extension - www.joe.org

April 2013 // Volume 51 // Number 2 // Research In Brief // v51-2rb3

Creating a Minnesota Statewide SNAP-Ed Program Evaluation

Abstract
Systematic evaluation is an essential tool for understanding program effectiveness. This article describes the pilot test of a statewide evaluation tool for the Supplemental Nutrition Assistance Program–Education (SNAP-Ed). A computer algorithm helped Community Nutrition Educators (CNEs) build surveys specific to their varied educational settings and curricula. The algorithm determined whether a written evaluation survey was appropriate for audiences and provided a selection of questions based on key nutrition messages presented. Feedback from CNEs regarding the evaluation tool-building process with pre-existing questions indicated that, with revisions, there was value in implementing it on a statewide basis.


Abby Gold
Nutrition and Wellness Specialist/Assistant Professor
University of Minnesota Extension and North Dakota State University Extension Service
Fargo, North Dakota
abby.gold@ndsu.edu

Trina Adler Barno
Assistant Extension Professor
University of Minnesota Extension Regional Office
Mora, Minnesota
barno001@umn.edu

Shelley Sherman
Assistant Extension Professor Regional Office
University of Minnesota Extension Regional Office
Andover, Minnesota
sherm028@umn.edu

Kathleen Lovett
Assistant Extension Professor Regional Office
University of Minnesota Extension Regional Office
Rochester, Minnesota
klovett@umn.edu

G. Ali Hurtado
Research Fellow
University of Minnesota Extension
St. Paul, Minnesota
hurt0033@umn.edu

Introduction

The goal of the University of Minnesota Extension's Supplemental Nutrition Assistance Program-Education (SNAP-Ed) program is to engage SNAP-eligible participants in choosing healthful, safe foods and active lifestyles, using the information and skills gained through SNAP-Ed programming. SNAP-Ed programs employ a wide range of activities, varying in intensity, instructional method, and topic area, which makes it a challenge to develop a common core of measures for understanding which approaches work best (Guthrie, Stommes, & Voichick, 2006) and for systematically evaluating outcomes. The program as a whole needs to empirically demonstrate improved knowledge and changed behaviors based on the delivery of research-based curricula and programs (Guthrie, Stommes, & Voichick, 2006). Studies have examined the effectiveness of specific statewide curricula and materials (Betterly & Dobson, 2000; Hoover, Litchfield, & Martin, 2009a; Hoover, Litchfield, & Martin, 2009b; Jones, Larke, & Nobles, 2006), adding to the research base for the various curricula used.

In a review of evaluation instruments and methods used in nutrition interventions, Contento, Randell, and Basch (2002) suggested that evaluation methods should be well tailored to the "purpose, duration, and power of the intervention" (p. 12). Their review recommends less burdensome, shorter evaluation instruments for low-literacy participants (Contento, Randell, & Basch, 2002).

In Minnesota, SNAP-Ed reaches over 65,000 low-income individuals annually with lessons aimed at improving diet quality, food safety practices, food resource management, and food security. A limited number of primary key nutrition messages are emphasized to all participants in Minnesota, including:

  • Increase consumption of fruits and vegetables, low-fat or fat-free calcium-rich foods and beverages, and whole grain foods;
  • Increase regular physical activity; and
  • Enroll in food support (stressed with adult audiences).

To inspire behavior change, culturally sensitive nutrition curricula must address the multiple factors that affect eating practices (Jones, Larke, & Nobles, 2006). Challenges such as participant literacy and language levels, learning styles, mental and physical disabilities, severe economic stress, sub-optimal teaching situations, and variable attendance influence the ability to secure outcome data from Minnesota SNAP-Ed audiences. Furthermore, Minnesota SNAP-Ed programming itself varies from course to course in duration, learning setting, curriculum, and teaching methods.

Prior to the project described here, paraprofessional Community Nutrition Educators (CNEs) in Minnesota were using a variety of tools to gather and assess participant outcomes. Evaluation mainly focused on process measures (such as number of participants served). Therefore, Minnesota SNAP-Ed sought to design a statewide uniform evaluation system that measured participant behavioral outcomes and knowledge gains. This article has three main objectives:

  1. Reflect on the process used to develop a consistent, systematic, statewide evaluation system.
  2. Highlight the results of educator feedback gathered during pilot testing of the statewide system.
  3. Discuss important next steps and considerations in implementing a statewide evaluation system.

Methods

Objective 1: Evaluation System Development Process

To begin development of a standardized, uniform, statewide outcome evaluation for Minnesota's SNAP-Ed program, a systematic examination of all aspects of SNAP-Ed programming was undertaken. Program and evaluation activities were sorted according to Jacobs' five-tiered approach to evaluation (pre-implementation, accountability, program processes, program outcomes, and program impacts) to isolate the activities that fell under the purview of program outcomes (Jacobs, 1988; Mistry, Jacobs, & Jacobs, 2009; Nielsen, 2011). SNAP-Ed participant characteristics were examined. Program resources, including academic expertise, available finances, Minnesota's paraprofessional staff, and the settings in which we provide nutrition education, were also defined.

A major requirement for the final evaluation system was flexibility to accommodate variations in intensity, instructional method, topic areas, and, most important, audiences. Deliberate decisions were made to ensure that the written evaluation system would be appropriate for the audience, easy to implement, clearly related to programming, and both efficient and effective. These decisions include the following.

  • The evaluation system will embody a level of flexibility that allows it to be tailored to programming circumstances.
  • The evaluation tool will be formatted as a reflective post-test survey.
  • A limited number of USDA key messages (5) will be evaluated.
  • Youth participants in 3rd grade and older will be surveyed.
  • A specific, determined number of questions will be asked of each audience.
  • A dosage at which knowledge-based evaluations are suitable (2-4 class sessions), and one at which behavior-based evaluations are appropriate (5 or more sessions) will be determined.
  • Audiences receiving two or more sessions will be surveyed; this criterion was determined based on program experience. A review by Olander (2007) about the role of dosage in nutrition education was inconclusive as to the amount of exposure needed to drive behavior change. Generally, repeated exposures lead to greater levels of change.
  • A definition of what constitutes an appropriate setting for a written evaluation (conducive environment) will be determined (writing space and ability to provide individual assistance).
  • A sufficient level of literacy will be required.

To accommodate the great variety in program intensity, instructional methods, topics, and audiences, a decision-making algorithm was developed consisting of decision points around audience age-range, lesson duration, participant literacy, and the nutrition education setting. A bank of questions was developed after examining the following sources: USDA impact indicators from EFNEP (Expanded Food and Nutrition Education Program), nutrition knowledge evaluation questions developed by other state Extension programs, and Minnesota SNAP-Ed evaluation tools. Question banks were developed or adapted around the five primary key nutrition messages taught in Minnesota and were written below the 6th grade reading level. Knowledge and behavior questions were written specifically for children in grades 3-6, teenagers, and adults/seniors. Standardized templates in a retrospective post-test format were then developed for each audience (Figure 1).
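
To illustrate the shape of this decision logic, the following minimal Python sketch encodes the decision points described above. The class name, field names, and return strings are hypothetical, chosen only for illustration; the thresholds (a minimum of two sessions for any written survey, 2-4 sessions for knowledge-based questions, 5 or more for behavior-based questions, plus a conducive setting and sufficient literacy) follow the criteria listed above. The actual Web-based tool is not reproduced here.

    from dataclasses import dataclass

    @dataclass
    class CourseProfile:
        """Characteristics a CNE reports about a class before an evaluation is chosen."""
        audience: str              # e.g., "child_gr3_6", "teen", "adult_senior" (illustrative labels)
        sessions: int              # number of class sessions in the lesson series
        conducive_setting: bool    # writing space and individual assistance available
        sufficient_literacy: bool  # audience can complete a written survey

    def recommend_evaluation(course: CourseProfile) -> str:
        """Apply the decision points described above to one class."""
        # Fewer than two sessions, a non-conducive setting, or insufficient
        # literacy means no written evaluation is issued.
        if course.sessions < 2 or not course.conducive_setting or not course.sufficient_literacy:
            return "no written evaluation"
        # Two to four sessions: knowledge-based questions; five or more: behavior-based.
        question_type = "knowledge-based" if course.sessions <= 4 else "behavior-based"
        return f"{question_type} retrospective post-test, {course.audience} template"

    # Example: an adult class meeting six times in a conducive setting.
    print(recommend_evaluation(CourseProfile("adult_senior", 6, True, True)))
    # -> behavior-based retrospective post-test, adult_senior template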

Figure 1.
Survey Example

Evaluation System Pre-Testing

The decision-making algorithm was transferred into a Web-based format to assist CNEs in determining which, if any, written evaluation tool to use in their classes (Figure 2). The algorithm asks a series of questions about audience characteristics and then links to an appropriate template and question bank from which a form can be built. The resulting form is a 10-question retrospective pre-test/post-test evaluation (Nielsen, 2011). The algorithm was then pre-tested with 25 CNEs across Minnesota over a 6-month period. A user survey garnered feedback about the decision-making algorithm, the resulting evaluation forms, and individual evaluation questions.
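
As a rough illustration of how a form might be assembled from the question banks, the Python sketch below pairs an audience with the key messages taught and selects up to 10 statements. The bank entries, message labels, and function name are hypothetical placeholders, not the actual Minnesota SNAP-Ed item wording.

    # Question bank keyed by (audience, key message); statements shown are
    # illustrative placeholders only.
    QUESTION_BANK = {
        ("adult_senior", "fruits_vegetables"): [
            "I eat fruits or vegetables at most meals.",
            "I plan meals that include vegetables.",
        ],
        ("adult_senior", "physical_activity"): [
            "I am physically active most days of the week.",
        ],
        # ... additional (audience, key message) entries ...
    }

    def build_survey(audience, key_messages, max_items=10):
        """Collect statements covering the messages taught, capped at max_items."""
        items = []
        for message in key_messages:
            items.extend(QUESTION_BANK.get((audience, message), []))
        return items[:max_items]

    # Each selected statement is rated twice by the participant, "before the
    # lessons" and "now," giving the retrospective pre-test/post-test format.
    for statement in build_survey("adult_senior", ["fruits_vegetables", "physical_activity"]):
        print(statement)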

After the first pre-test, the algorithm and questions were revised in response to feedback, and the system was piloted statewide for approximately 8 months. Ninety-eight CNEs were trained and expected to use the system. CNEs were again surveyed for feedback on the system's overall utility. Results from the pilot, including CNE impressions and recommendations for both the decision-making algorithm and the individual questions, are reported in the following section. The Institutional Review Board exempted the pilot study with CNEs because it was considered program improvement.

Figure 2.
Example of the Decision-Making Algorithm

Results of Evaluation System Pilot Test

Objective 2: Pilot Test Decision-Making Algorithm Feedback

Table 1 highlights the results of the CNE pilot test and online survey that gathered their feedback. A total of 64% of the responding CNEs (n=62) indicated that they found the algorithm "useful" or "very useful." Ten percent of the CNEs indicated that the algorithm was "not very useful," and 3% indicated that it was "difficult to use."

Regarding the ease of answering the questions about their audiences and teaching circumstances, 69% reported that it was "easy" or "very easy" to do so. Forty-three percent of respondents indicated that there were circumstances that did not fit well with the choices offered. Comments on this item centered primarily on two issues: 1) CNEs believed too many audiences were excluded from written evaluation based on the algorithm's criteria (n=9), and 2) they believed too few key messages were addressed in the question banks (n=8). A combination of CNE training and clarified language within the algorithm addressed both issues.

Roughly one-third of CNEs surveyed indicated that 75% or more of their audiences were not issued a written evaluation based on the criteria built into the algorithm. Finally, 70% of CNEs reported that they thought the algorithm would be a useful tool for new CNEs.

Table 1 also highlights the results of the feedback about the evaluation questions. For all audiences (adults/seniors, teens, and children), CNEs reported that the majority of their participants successfully completed the written evaluations generated by the evaluation system without problems. Additionally, the majority of CNEs reported that there were enough question choices for all audiences to cover the messages taught in their courses. Eighty-three percent of respondents expressed that they "moderately," "very much," or "totally" liked the option of selecting post-test statements from a pre-existing list. CNEs also expressed concerns and suggested revisions regarding several of the questions in the bank; these were incorporated into the next iteration of the system.

Table 1.
Select Results from CNE Pilot Test of Evaluation System

How helpful was the Evaluation Decision Making Game in guiding you to make choices about evaluating your teaching? (85.5% found it at least a little useful)
  • A little useful: 24.2%
  • Useful: 38.7%
  • Very useful: 22.6%

Please check all of the items below that describe your thoughts about the Evaluation Decision Making Game:
  • The game didn't always cover the circumstances that I ran into with my audiences: 46.7%
  • The game is easy to use: 55.0%
  • The game questions are easy to understand: 48.3%
  • The game makes decision-making around evaluation easier: 43.3%

Do you think the Evaluation Decision Making Game might be a useful tool for new CNEs?
  • Yes: 70.5%

From the list below, please check anything you have observed about your participants taking the post-test.
  • The majority of participants have been successfully completing the post-tests without any problems: 55.6%

Are there enough choices of evaluation questions/statements on the lists for the evaluation tools for you to evaluate the lessons you teach to your participants in the priority Key Message areas?
  • There are enough choices; I am nearly always able to find evaluation questions/statements to evaluate my lessons in the priority Key Message areas: 50.5%
  • There are not quite enough choices; I sometimes have trouble finding evaluation questions/statements to evaluate my lessons in the priority Key Message areas: 23.7%

How do you like the option of selecting post-test evaluation questions/statements from a list of pre-existing questions? (83.4% at least moderately like it)
  • Totally like it: 27.1%
  • Very much like it: 29.2%
  • Moderately like it: 27.1%

Discussion

Objective 3: Next Steps for the Statewide Evaluation System

SNAP-Ed uses research-based curricula and pedagogy to educate SNAP-eligible audiences about nutrition and health. Evaluating the impact of the program on a variety of audiences is of utmost importance in order to demonstrate participant knowledge gain and behavior change. Because of SNAP-Ed's inherent learner-centeredness and various program delivery options, systematic evaluation becomes challenging. The Minnesota SNAP-Ed program has attempted to develop a systematic, statewide approach to determining program impact.

A systematic evaluation requires preliminary work and pre-testing with both educators and audiences (Guthrie, Stommes, & Voichick, 2006). After 2 years of development and testing, the evaluation decision-making system was launched across the state, and almost 10,000 surveys were completed. However, based on the decision-making algorithm, a high percentage of SNAP-Ed audiences were excluded from evaluation. This barrier requires the evaluation team to clarify how to evaluate multi-age, multi-literacy level, and multiethnic groups for whom written measures may not be effective.

The development and evaluation of instruments that accurately measure nutritional factors in a variety of populations is challenging. Kristal et al. (1990), in their validation of a short checklist, found support for using rapid measures to assess nutritional intake and suggested the use of rapid measurement tools in the evaluation of public health nutrition programs. Minnesota's SNAP-Ed evaluation program uses a rapid assessment instrument that is easily adapted to include the key messages emphasized during any given programming event.

A current challenge involves the difficulty of using written surveys to evaluate low-literacy and English Language Learner (ELL) participants. Evaluation tools designed for oral-language learners could probably be used universally, although that theory needs testing. Townsend (2006) used an iterative process to simplify text and include visual content in her surveys; she describes a methodical way to develop evaluation tools that address the needs of a variety of SNAP-Ed participants. Teaching methods are designed to address a variety of learning styles, but learner assessment tools have not kept pace with that pedagogy, especially in public health nutrition programs. The use of pictures and visual aids in both the educational and evaluation portions of a program can help reveal the impact a program has on people with varied literacy skills, languages, and cultures.

Conclusions/Implications

The positive feedback received from the CNEs regarding the "algorithm" and the tool-building process with pre-existing questions indicates that, with revisions, this system would be valuable and well received if implemented on a statewide basis. However, while written evaluation tools are widely used for measuring outcomes, they raise challenges for many SNAP-Ed audiences. A "bank" of alternative (non-written) outcome evaluation options would be a valuable addition to this system. This need was also noted by Hoover, Litchfield, and Martin (2009a), who stressed the importance of open-ended, application-type questions for examining higher-level cognition among participants.

The formative questions from the algorithm development process used for the system described here are readily applicable to community-based nutrition education programming as well as other kinds of programming. The question decision-making tool is a model that can be used with other SNAP-Ed programs across the country because the nutrition key messages are similar across programs. With its ease of use and standardized end results, this kind of system could well serve other community-based educational efforts.

References

Betterly, C., & Dobson, B. (2000). Tools for evaluating written and audiovisual nutrition education materials. Journal of Extension [On-line], 38(4) Article 4TOT3. Available at: http://www.joe.org/joe/2000august/tt3.php

Contento, I., Randell, S., & Basch, C. (2002). Review and analysis of evaluation measures used in nutrition education intervention research. Journal of Nutrition Education and Behavior, 34, 2-25.

Guthrie, J. F., Stommes, E., & Voichick, J. (2006). Evaluating food stamp nutrition education: issues and opportunities. Journal of Nutrition Education and Behavior, 38, 6-11.

Hoover, J. R., Litchfield R. E., & Martin P. A. (2009a). Qualitative tools to examine EFNEP curriculum delivery. Journal of Extension [On-line], 47(3) Article 3FEA3. Available at: http://www.joe.org/joe/2009june/a3.php

Hoover, J. R., Litchfield R. E., & Martin P. A. (2009b). Evaluation of a new nutrition education curriculum and factors influencing its implementation. Journal of Extension [On-line], 47(1) Article 1FEA4. Available at: http://www.joe.org/joe/2009february/a4.php

Jacobs, F. H. (1988). The five-tiered approach to evaluation: context and implementation. In H. B. Weiss & F. H. Jacobs (Eds.), Evaluating family programs. New York: Aldine de Gruyter.

Jones, W. A., Larke, A., & Nobles, C. J. (2006). The effectiveness of a public nutrition education and wellness system program. Journal of Extension [On-line], 44(3) Article 3RIB5. Available at: http://www.joe.org/joe/2006june/rb5.php

Kristal, A. R., Abrams, B. F., Thornquist, M. D., Disogra, L., Croyle, R. T., et al. (1990). Development and validation of a food use checklist for evaluation of community nutrition interventions. American Journal of Public Health, 80, 1318-1322.

Mistry, J., Jacobs, F., & Jacobs, L. (2009). Cultural relevance as program-to-community alignment. Journal of Community Psychology, 37, 487-504.

Nielsen, R. B. (2011). A retrospective pretest-posttest evaluation of a one-time personal finance training. Journal of Extension [On-line], 49(1) Article 1FEA4. Available at: http://www.joe.org/joe/2011february/a4.php

Olander, C. (2007). Nutrition education and the role of dosage. Retrieved from: http://www.mypyramidforkids.gov/ora/menu/Published/NutritionEducation/Files/LitReview_Dosage.pdf

Townsend, C. (2006). Evaluating food stamp nutrition education: process for development and validation of evaluation measures. Journal of Nutrition Education and Behavior, 38, 18-24.