June 2002 // Volume 40 // Number 3 // Feature Articles // 3FEA4

Be "Logical" About Program Evaluation: Begin with Learning Assessment

Abstract
In an effort to build program planning and evaluation capacity in Extension faculty, this article focuses on assessing the learning that takes place in an educational program. Using logic modeling as the basis for meaningful evaluation, specific steps are outlined for measuring learning outcomes. These steps include articulating outcomes, turning outcomes into knowledge statements, and constructing a tool to measure perceived changes in knowledge. Although Extension educators are concerned not just with learning, but also with the action and social change that follow, focusing on learning assessment provides an excellent opportunity to build skills in program planning and evaluation.


Mary E. Arnold
4-H Youth Development Specialist,
Oregon State University
Corvallis, Oregon
Internet Address: mary.arnold@orst.edu


Introduction

The value of evaluating Extension educational programs has received a great deal of attention recently, and many Extension educators are coming to see evaluation as an integral part of their work. In recent years, considerable effort has been put into creating Extension Service "cultures" that value evaluation. In addition, the use of logic modeling in performance measurement has been promoted across many programs (Curnan & LaCava, 2000; Hatry, van Houten, Plantz, & Greenway, 1996; "Logic Model," 2000). Even with these valuable efforts, however, many educators remain unsure of how to take the first step in evaluating their educational programs. This article is intended to help Extension faculty develop skills in program evaluation by focusing first on the assessment of learning, or short-term outcomes.

The work of county Extension faculty is to address local concerns and needs through educational programming. The expertise of county faculty is often grounded in a specific knowledge base and not necessarily in research design and statistics. Given this, it is not hard to see why evaluation has traditionally been seen as an "add-on" or something that has to be done in response to administrative mandates. While we are making great strides in developing a culture that values evaluation, we have not yet reached the point where evaluation is seen to have an inherent place in our county programs.

As the educational design specialist for the Oregon 4-H program, I have had the opportunity to assist county 4-H educators in planning and evaluating various 4-H programs. While I embrace, teach, and use a complete logic modeling process for program planning and evaluation, I find that asking those who are just beginning to identify multiple evaluation points across the whole model is overwhelming and confusing, and at times diminishes their sense of being able to conduct a program evaluation at all. Such reactions create barriers to conducting effective evaluations.

In response, I have adopted a developmental approach to teaching evaluation, believing that once the basic ideas and tools of evaluation are mastered, other quests for knowledge can take place. One of the first steps in this approach is to dissect the program logic model into discrete parts and encourage the educator to focus on evaluating only one part of the program at a time, in this case beginning with the assessment of short-term, or learning, outcomes.

Logic Modeling

Meaningful evaluation grows out of sound program planning. Far from being an "add-on," evaluation begins with the initial planning of an educational program (Bush, Mullis, & Mullis, 1995). Logic modeling as an aid in program evaluation has received considerable attention in recent years (Curnan & LaCava, 2000; Hatry, van Houten, Plantz, & Greenway, 1996). Driven primarily by the need to better understand the effects and impacts of our programs, and supported by the education outreach efforts of the University of Wisconsin Extension, an awareness of the usefulness of logic modeling in program planning and evaluation has swept Extension services across the country. In a nutshell, a logic model serves as a planning and evaluation tool. As a planning tool, it can help educators identify what they will put into a given program (inputs) and what they hope to do and whom they hope to reach (outputs). The model also identifies short-, medium-, and long-term outcomes for the program (Figure 1). As an evaluation tool, it can help educators see what and when to evaluate.

Figure 1.
A Logic Model (Adapted from University of Wisconsin Extension: "Logic Model," 2000)
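
To make the model's parts concrete, here is a minimal sketch, in Python, of a hypothetical program captured in the inputs/outputs/outcomes layout shown in Figure 1. All program details below are invented for illustration and are not drawn from any specific Extension program.

```python
# A minimal sketch of a logic model as a plain data structure, following
# the inputs -> outputs -> outcomes layout in Figure 1. All program
# details below are hypothetical.
logic_model = {
    "inputs": ["staff time", "curriculum materials", "volunteer teachers"],
    "outputs": {
        "activities": ["three 2-hour workshops on independent living"],
        "participation": ["older teens preparing to live on their own"],
    },
    "outcomes": {
        "short_term": ["participants know how to read apartment rental ads"],
        "medium_term": ["participants create and follow a spending plan"],
        "long_term": ["participants manage independent living within a budget"],
    },
}

# Used as an evaluation tool, the model points to where to look first:
# each short-term outcome is a candidate for a learning assessment.
for outcome in logic_model["outcomes"]["short_term"]:
    print("Assess learning for:", outcome)
```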

While logic modeling can serve as a useful tool in helping educators articulate the "program's theory of action" (Patton, 1997), or how a program is meant to produce desired results, it can leave the impression that one must jump to a program's long-term outcomes in order to evaluate it effectively. This is especially true when Extension educators are asked to demonstrate the "impact" of their programs, because in many cases impact is equated with long-term outcomes. I believe it is this tendency to assume one needs to demonstrate long-term outcomes that leads to the sense of being overwhelmed at the thought of conducting evaluations.

The beauty of a logic model, however, is in the fact that it clearly outlines the different levels of outcomes that are expected from an educational experience. This outline allows educators to identify the appropriate places to collect evaluation data, given the nature of the program's intent and design.

Because one of Extension's primary roles is teaching, it makes sense that one of the main places to which we should turn our attention is on short-term outcomes, focusing our first evaluation efforts on measuring what has been learned. This is not to say that medium-term outcomes (action) and changes in social conditions (impact) are not also important measurements of our success, depending upon the purpose of our program, but it does highlight the fact that the basic outcome of many of our educational programs is the learning that takes place.

While the intent of this article is to aid Extension educators in focusing evaluation on one point on a logic model, it is important to caution that such a focus could imply that logic models are linear in nature. A linear view implies that learning leads to action and that action leads to changes in social conditions. Such linear movement is possible, but it is not necessarily what happens in many programs. It is important to stress, therefore, that logic modeling be seen as a dynamic, systems approach to planning and evaluating what is taking place. Seen this way, a program is conceptualized not just in a hierarchical manner (Bennett, 1975), but in a more complex and nuanced way.

Despite the risk of inadequately portraying the power of logic modeling by focusing on learning assessment, I believe that such a focus is helpful to educators for two primary reasons. First, focusing on learning assessment provides an entry point to understanding and using logic modeling for program evaluation. Second, focusing on learning assessment is a concrete and useful way for educators with little or no evaluation training to experience and practice systematic inquiry into the programs they provide. My experience has shown that such initial forays into program evaluation often lead to a desire to conduct more in-depth evaluation, which, in turn, leads to an increased use and understanding of logic modeling. In short, beginning with learning assessment is just that: a great place to begin.

Learning Outcome Assessment

The first step in assessing learning is to use a logic model to determine the appropriate learning outcomes to measure, because what is learned needs to be connected to the program's inputs and outputs. Because many of the educators I work with do their teaching through workshops or seminars, one of the first things I ask is: "Given what you are planning to do, and who your audience is, what are the two or three main learning outcomes for your session?" This works very well for short sessions of 1-3 hours; longer sessions can be broken down into blocks of 1-3 hours, with the main learning outcomes for each block identified.

Once the educator is able to articulate the learning outcomes for his or her workshop, we begin to explore options for assessing the learning that takes place. Using a logic model forces us to clearly link the program activities to what is intended to be learned.

The assessment of learning outcomes can happen in many ways, depending on the situation at hand. For example, we have used observation as assessment in nutrition education programs for young children. One of the learning outcomes for the program is that children know the importance of washing their hands before eating as well as how to properly wash their hands. At specified times during the 2 weeks following the session on hand washing, teachers recorded which children spontaneously washed their hands when it was time for a snack. This observational method measured which children had achieved the program outcome of learning the importance and method of hand washing before eating.
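
As a rough illustration of how such observation records might be tallied, the sketch below uses invented names, records, and a hypothetical achievement criterion; none of these details come from the actual program.

```python
# A minimal sketch of tallying hand-washing observations. Each child has
# one True/False entry per observed snack time over the two weeks.
# All names, records, and the criterion below are hypothetical.
observations = {
    "child_a": [True, True, False, True],
    "child_b": [False, False, True, True],
    "child_c": [True, True, True, True],
}

# One possible criterion (an assumption, not the article's): a child
# achieved the outcome if he or she washed spontaneously at least
# three of the four observed snack times.
achieved = [child for child, record in observations.items() if sum(record) >= 3]
print(f"{len(achieved)} of {len(observations)} children met the hand-washing outcome")
```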

In another setting, older children participating in a natural science curriculum, with the outcome of learning the lifecycle of a salmon, were asked to make drawings of the salmon's life. These drawings were made twice: once before the session on the salmon's lifecycle and again at the end of the session. The changes in detail between the two drawings demonstrated what had been learned: the pictures drawn at the end of the session showed considerably more detail and portrayed the lifecycle more accurately than those drawn at the beginning.

An end-of-program questionnaire is also a useful way to assess learning. Questionnaires are helpful in obtaining immediate feedback about the effectiveness of a program in achieving its short-term outcomes (Taylor-Powell & Renner, 2000). One questionnaire method is a simple retrospective pre-test that is directly related to the learning outcomes for the session. Using the retrospective pre-test method, participants are asked to rate their knowledge of a given outcome at the end of the workshop and then rate their knowledge of the outcome prior to the session (Rockwell & Kohn, 1989). The participant's perception of his or her learning is then assessed by analyzing the difference in the reported level of knowledge before and after the workshop. There is recent evidence that conducting this type of learning assessment is a valid technique for capturing perceived changes in knowledge (Pratt, McGuigan, & Katzev, 2000).

Constructing a Tool

Once learning outcomes have been identified, a brief but effective learning assessment tool can be developed. Take, for example, three learning outcomes from an educational program designed to teach older teens knowledge about the transition to independent living. Learning outcomes may be stated like this:

  • Participants will know how to read and understand apartment rental ads.
  • Participants will know how to allocate financial resources to cover "needs" vs. "wants."
  • Participants will know how to establish spending goals.

Learning outcomes are then turned into statements of knowledge levels and placed in a well-organized format on a short questionnaire given to the participants at the end of the program (Figure 2).

Figure 2.
A Sample Learning Assessment Tool Using a Retrospective Pre-Test Method

Please help us understand what you learned through participating in Survivor Camp. Please indicate your rating both before the workshop session and after the workshop session on a scale of 1-5.

"1" indicates little or no knowledge and "5" indicates a great deal of knowledge.
After Survivor Camp:
  • I understand how to interpret apartment rental ads: 1 2 3 4 5
  • I know how to allocate limited financial resources between needs and wants: 1 2 3 4 5
  • I know how to establish spending goals: 1 2 3 4 5

Before Survivor Camp:
  • I understand how to interpret apartment rental ads: 1 2 3 4 5
  • I know how to allocate limited financial resources between needs and wants: 1 2 3 4 5
  • I know how to establish spending goals: 1 2 3 4 5

After the questionnaires are completed, responses to each question can be analyzed with a paired t-test to assess perceived changes in participants' knowledge levels.
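
As a sketch of what that analysis might look like, the example below runs a paired t-test on invented 1-5 ratings for a single questionnaire item, using the SciPy library; the data and sample size are hypothetical.

```python
# A minimal sketch of analyzing retrospective pre-test ratings with a
# paired t-test. The ratings are hypothetical 1-5 responses to one
# questionnaire item, one (before, after) pair per participant.
from scipy import stats

before = [2, 1, 3, 2, 2, 1, 3, 2]
after = [4, 3, 5, 4, 3, 4, 4, 5]

# ttest_rel tests whether the mean of the paired differences is zero.
t_stat, p_value = stats.ttest_rel(after, before)

mean_change = sum(a - b for a, b in zip(after, before)) / len(before)
print(f"mean perceived change: {mean_change:.2f} points")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```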

By using a retrospective pre-test questionnaire, educators are able to assess the perceived learning that has taken place. Such a method is different from the more typical satisfaction questionnaire often used at the end of programs. Satisfaction questionnaires give educators insight into how well participants liked the program, but they do not provide any insight into what participants learned. Even though retrospective pre-tests are useful for understanding perceived changes in participant learning, it is important to recognize their limitations: end-of-session questionnaires provide only self-reported information at one point in time, at the conclusion of the program (Taylor-Powell & Renner, 2000).

Conclusion

By using a logic model to specify the learning outcomes for an educational program, Extension educators are able to measure more accurately the learning that takes place. This information is useful both for program reporting (summative evaluation) and for program improvement (formative evaluation). By articulating what the intended learning is and measuring whether that learning actually takes place, educators are participating in what Patton (1997) calls "reality testing": knowing whether our programs actually accomplish in reality what we think they do in theory.

As Extension educators, we all hope that our programs make an impact on social conditions. Such long-term program outcomes are important and should not be diminished or undervalued because they are more difficult to measure. Nonetheless, we need to be clear about what different programs can successfully accomplish. Logic modeling can help educators pinpoint the most realistic level at which to conduct a program evaluation. When knowledge change is the intent of the program, then it makes sense to focus evaluation efforts on the short-term outcomes, or the learning that has taken place.

The purpose of focusing on learning assessment in this article is not to imply that our evaluation efforts can end with knowing whether knowledge changed. In many cases, learning alone is not enough; there must be action that comes from the learning. Many of our stakeholders are looking for changes in behaviors and actions. In addition, we know that changes in knowledge do not always result in positive behavioral changes.

Despite these cautions, Extension educators can use learning assessment as a meaningful and useful place to begin evaluating their programs. The simple step of articulating learning outcomes can itself improve a program, for often when we look closely at what we want participants to learn, we see that we may need to change the content of our programs in order to accomplish the learning outcomes. Likewise, measuring change in knowledge level helps us to be more critical of our teaching. After all, there is little point in teaching if what we intend to impart is not learned.

References

Bennett, C. (1975). Up the hierarchy. Journal of Extension, 13(2), 7-12.

Bush, C., Mullis, R., & Mullis, A. (1995). Evaluation: An afterthought or an integral part of program development. Journal of Extension [On-line], 33(2). Available at: http://www.joe.org/joe/1995april/a4.html

Curnan, S. P., & LaCava, L. A. (2000). Getting ready for outcome evaluation: Developing a logic model. Community Youth Development Journal, 16(1).

Hatry, H., van Houten, T., Plantz, M. C., & Greenway, M. T. (1996). Measuring program outcomes: A practical approach. Alexandria, VA: United Way of America.

Logic Model. (2000). Madison, WI: University of Wisconsin Extension. Retrieved June 12, 2001, from http://bluto.uwex.edu/ces/pdande/PDFs/logicmodel.pdf

Patton, M. Q. (1997). Utilization-focused evaluation: The new century text. Thousand Oaks, CA: Sage Publications.

Pratt, C. C., McGuigan, W. M., & Katzev, A. R. (2000). Measuring program outcomes: Using retrospective methodology. The American Journal of Evaluation, 21(3), 341-349.

Rockwell, S. K., & Kohn, H. (1989). Post-then-pre evaluation: Measuring behavior change more accurately. Journal of Extension [On-line], 27(2). Available at: http://www.joe.org/joe/1989summer/a5.html

Taylor-Powell, E. (2000). The LOGIC model: Program performance framework. Providing Leadership for Program Evaluation Conference, Vail, CO, June 2000. Madison, WI: University of Wisconsin Extension.

Taylor-Powell, E., & Renner, M. (2000). Collecting evaluation data: End of session questionnaires. Madison, WI: University of Wisconsin Extension.