August 2019 // Volume 57 // Number 4 // Tools of the Trade // v57-4tt1
Linking Extension Program Design with Evaluation Design for Improved Evaluation
Abstract
We present a framework to help those working in Extension connect program designs with appropriate evaluation designs to improve evaluation. The framework links four distinct Extension program domains—service, facilitation, content transmission, and transformative education—with three types of evaluation design—preexperimental, quasi-experimental, and true experimental. We use examples from Extension contexts to provide detailed information for aligning program design and evaluation design. The framework can be of value to various audiences, including novice evaluators, graduate students, and non-social scientists, involved in carrying out systematic evaluation of Extension programs.
Introduction
Linking program design and evaluation design is critically important to conducting systematic evaluation of Extension programs. Yet a framework for making such a connection has not been presented. Instead, previous frameworks have tended to focus on either educational programs or evaluation designs. For example, Franz and Townson (2008) provided a framework for classifying educational programs through a quadrant analysis. By plotting content on the x-axis and process (delivery methods) on the y-axis and then superimposing program design domains onto four quadrants, they suggested four distinct domains of educational programming: service, facilitation, content transmission, and transformative education (see Figure 1). The four program domains mirror components of key evaluation models (Bennett, 1975; Bennett & Rockwell, 1995; Kirkpatrick, 1996) often used in Extension.
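The quadrant logic can be restated compactly in code. The Python sketch below is purely illustrative (it is not part of Franz and Townson's framework); it simply maps hypothetical low/high ratings of content and process to the corresponding program design domain.

```python
# Illustrative restatement of the quadrant logic described by Franz and
# Townson (2008): rate a program's content and process intensity, then map
# that pair of ratings to one of the four program design domains.

def classify_program(content_level: str, process_level: str) -> str:
    """Return the program design domain for 'low'/'high' ratings."""
    domains = {
        ("low", "low"): "service",
        ("low", "high"): "facilitation",
        ("high", "low"): "content transmission",
        ("high", "high"): "transformative education",
    }
    return domains[(content_level, process_level)]

# Example: a program high in both content and process
print(classify_program("high", "high"))  # transformative education
```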
Figure 1.
Four Domains of Educational Programming
Adapted from "The nature of complex organizations: The case of Cooperative Extension," by N. Franz and L. Townson, 2008, in M. T. Braverman, M. Engle, M. E. Arnold, and R. A. Rennekamp (Eds.), Program Evaluation in a Complex Organizational System: Lessons from Cooperative Extension, pp. 5–14, Jossey-Bass, San Francisco, CA.
Other scholars have focused primarily on different evaluation designs. For example, in their seminal piece, Campbell and Stanley (1963) classified evaluation/research designs into three broad categories: preexperimental, quasi-experimental, and true experimental (see Figure 2). Later frameworks categorized evaluation designs in similar ways. For instance, researchers for Project STAR (2006) characterized designs as exploratory, descriptive, and experimental/quasi-experimental, and Di Tommaso (2015) used process and outcome as design categories in presenting evaluation designs at an AmeriCorps symposium (see Figure 2).
Figure 2.
Classification of Evaluation Designs
Extension professionals use these three evaluation/research designs at various stages in their programs. Preexperimental/exploratory/process designs are often used in preliminary stages of program design, such as during needs assessments, when understanding contexts is vital to formulating better programs. Quasi-experimental/descriptive/outcome designs are used for documenting changes in program participants' knowledge, attitudes, skills, and actions (KASA). Finally, true experimental designs are used for determining causation. Both quasi-experimental and true experimental designs involve comparing two groups (treatment and control) and collecting data before and after a program; the key distinction is that a true experimental design randomly assigns participants to those groups.
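To make that distinction concrete, the following illustrative Python sketch (with hypothetical participant IDs) shows the random assignment step that separates a true experimental design from a quasi-experimental one; in a quasi-experimental design, the comparison groups are formed without this randomization.

```python
import random

# Minimal sketch of what distinguishes a true experimental design from a
# quasi-experimental one: participants are randomly assigned to the
# treatment or control group before pre- and post-program data collection.
# Participant IDs and group sizes here are hypothetical.

participants = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]

random.seed(42)               # fixed seed for a reproducible illustration
random.shuffle(participants)
midpoint = len(participants) // 2
treatment_group = participants[:midpoint]
control_group = participants[midpoint:]

print("Treatment:", treatment_group)
print("Control:  ", control_group)
```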
Our purpose with this article is to present a framework to help those working in Extension link program designs with appropriate evaluation designs to improve evaluation. To create this framework, we connected the Extension program domains described by Franz and Townson (2008) with the evaluation designs commonly used in Extension program evaluation. Herein, we use description and example to elucidate these connections. For each of Franz and Townson's program domains, we (a) provide a description and (b) explain the value of using one or more of the evaluation designs proposed by Campbell and Stanley (1963) to evaluate programs in that domain. We follow that discussion with Table 1, in which we link the two concepts—program design and evaluation design—using examples from Extension contexts.
Aligning Program Design Domains with Key Evaluation Designs
According to Franz and Townson (2008), the service domain of Extension programming includes several activities carried out by Extension educators as part of their job responsibilities and in service to communities. These activities require low levels of both process and content (Franz & Townson, 2008). From the evaluation design standpoint, simple "feedback" surveys, follow-up postcards, and documentation of services offered suffice for such programming and reflect lower levels (e.g., Reaction) of Bennett's hierarchy (Bennett, 1975). For the service domain, then, a preexperimental evaluation design is appropriate in that the needed data can be collected via "feel good" surveys that indicate the number of service activities Extension educators have conducted to develop rapport with their clientele.
Programs in the facilitation domain involve a high level of process and a low level of content (Franz & Townson, 2008). Extension educators facilitate processes by convening communities to address critical issues. For example, an Extension educator's helping community leaders organize a town hall meeting or other public forum to address health insurance literacy concerns would fall into the facilitation domain. For the facilitation domain, preexperimental design is appropriate. Extending the example, an Extension educator could use such a design to identify key stakeholders for forming a new health insurance literacy network. Simple open-ended surveys, group meetings, and collection of demographic information are appropriate for the facilitation domain.
Content transmission requires a high level of content and a low level of process (Franz & Townson, 2008). An example would be an effort in which an Extension specialist synthesizes recent research on teenage obesity and prepares a newsletter or web-based summary for teens and their parents. Programs focused on content transmission can be evaluated through the application of preexperimental and quasi-experimental designs involving short surveys, on-site observations, and/or focus group sessions, which can provide data on outcomes such as the degree to which community issues have been resolved. Surveys, for example, are useful for assessing content quality in terms of accuracy, readability, and navigability and thereby can inform the development and transmission of more accurate and accessible information to audiences.
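Although the examples here rely on surveys to judge readability, an automated estimate could complement that feedback. The sketch below applies the Flesch Reading Ease formula to a hypothetical passage of handbook text; the syllable counter is a rough heuristic, not a validated instrument, and the sample text is invented for illustration.

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count groups of consecutive vowels in the word."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease score; higher values indicate easier-to-read text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

# Hypothetical draft text from a health insurance literacy handbook
sample = ("Health insurance has its own vocabulary. "
          "This handbook explains common terms in plain language.")
print(f"Flesch Reading Ease: {flesch_reading_ease(sample):.1f}")
```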
The transformative education domain requires high levels of both process and content, with the goal being to change behaviors among program participants (Franz & Townson, 2008). For example, the work of an Extension educator who develops, delivers, and evaluates a healthful lifestyle program for senior citizens over a 3-year period to achieve long-term impact would be in the realm of transformative education. Such programs can be evaluated through quasi-experimental or true experimental designs involving follow-up studies and pre- and posttests for assessing, for example, the percentage increase in KASA. Continuing with the example, a true experimental design would be appropriate for documenting change in KASA between program participants (treatment group) and nonprogram participants (control group), and such documentation then could be used in determining causation and/or the effectiveness of the healthful lifestyle program.
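As a minimal sketch of the pretest-posttest comparison just described, the Python example below uses hypothetical KASA scores for participants and nonparticipants, reports the treatment group's percentage increase, and compares the two groups' gains with an independent-samples t-test (SciPy is assumed to be available).

```python
from statistics import mean
from scipy import stats  # assumed available for the significance test

# Hypothetical KASA scores (0-100) for program participants (treatment)
# and nonparticipants (control), measured before and after the program.
treatment_pre, treatment_post = [40, 45, 38, 50, 42], [68, 70, 60, 74, 65]
control_pre, control_post = [41, 44, 39, 48, 43], [45, 46, 41, 50, 44]

treatment_gains = [post - pre for pre, post in zip(treatment_pre, treatment_post)]
control_gains = [post - pre for pre, post in zip(control_pre, control_post)]

# Percentage increase in mean KASA score for the treatment group
pct_change = 100 * (mean(treatment_post) - mean(treatment_pre)) / mean(treatment_pre)
print(f"Percentage increase in KASA (treatment group): {pct_change:.1f}%")

# Independent-samples t-test comparing gains between groups; with random
# assignment (true experimental design), a significant difference supports
# a causal interpretation of the program's effect.
t_stat, p_value = stats.ttest_ind(treatment_gains, control_gains)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```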
To demonstrate how to link program design with evaluation design, we provide examples in Table 1 by indicating the evaluation design(s), evaluation method(s), outcome(s), and indicator(s) that may follow from a sample evaluation question in each domain.
Table 1.
Linking Program Design with Evaluation Design: Examples from Extension Contexts

| Evaluation question(s) | Evaluation design(s) | Evaluation method(s) | Expected outcome(s) | Indicator(s) |
|---|---|---|---|---|
| Service | | | | |
| What are the characteristics of people who participate in the Volunteer Income Tax Assistance program? | Preexperimental; exploratory or descriptive | Participant data log; document review; "feel good" surveys | Relationship development | Number of events; number of applications processed; demographics, income levels; tax refunds |
| Facilitation | | | | |
| Who participated in the discussion that led to creation of a health insurance network for the community? | Preexperimental | Short surveys, on-site observation, field visits | Issues resolved, community harmony, relationship building | Number of issues addressed |
| Content transmission | | | | |
| Is the content in the Health Insurance Literacy Handbook accurate, readable, and understandable for the audience? | Preexperimental or quasi-experimental | Surveys to determine use and quality of content in terms of accuracy, readability, navigability, etc. | Accurate and timely availability of information | Increased number of users |
| To what extent are program participants using the Health Insurance Literacy Handbook information? | Preexperimental or quasi-experimental | Follow-up with information users and providers; feedback from users | Continuous improvement | Number of users and value of information use |
| Transformative education | | | | |
| Do participants in the Obesity Reduction Program reduce their body mass index? | Quasi-experimental; experimental (randomized control trials) | Pretest-posttest to assess knowledge, attitudes, skills, and actions (KASA); follow-up with participants to assess behavior change | KASA change | Percent change in KASA; percent change in behavior; change in social, economic, and environmental conditions |
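For readers who want to reuse the alignment in Table 1 programmatically, one possible (purely illustrative) encoding is a simple lookup structure that mirrors the table's rows; the entries below paraphrase the designs and methods discussed above.

```python
# Illustrative lookup structure mirroring Table 1: each program design domain
# maps to the evaluation designs and example methods discussed in the text.
DOMAIN_TO_EVALUATION = {
    "service": {
        "designs": ["preexperimental (exploratory or descriptive)"],
        "methods": ["participant data log", "document review", '"feel good" surveys'],
    },
    "facilitation": {
        "designs": ["preexperimental"],
        "methods": ["short surveys", "on-site observation", "field visits"],
    },
    "content transmission": {
        "designs": ["preexperimental", "quasi-experimental"],
        "methods": ["surveys on content use and quality", "follow-up with users"],
    },
    "transformative education": {
        "designs": ["quasi-experimental", "true experimental (randomized control trials)"],
        "methods": ["pretest-posttest of KASA", "follow-up on behavior change"],
    },
}

for domain, plan in DOMAIN_TO_EVALUATION.items():
    print(f"{domain}: {', '.join(plan['designs'])}")
```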
As exemplified in Table 1, one can discern linkages across program design domain, evaluation design, data collection strategies, expected outcomes, and indicators. Understanding each of these components in the framework we have presented will help Extension professionals systematically evaluate their programs. Further, understanding both program design domains and evaluation designs will help them (a) make program outcomes more robust, (b) better understand complete program development–evaluation cycles, (c) link higher level process and content programs to advanced-level evaluation designs that generate useful evidence, and (d) guide selection of robust data collection and data analysis procedures. Franz and Archibald (2018) and Radhakrishna and Relado (2009) indicated that linking program design with evaluation design not only enhances evaluation capacity building but also fosters evaluative thinking. We believe that the proposed framework will be of value to various audiences, including novice evaluators, graduate students, and non-social scientists, in carrying out systematic evaluation of Extension programs. The framework will serve as a road map for connecting program design domains with evaluation designs to develop measurable objectives and indicators that lead to improved Extension program evaluation.
References
Bennett, C. (1975). Up the hierarchy. Journal of Extension, 13(2). Available at: https://www.joe.org/joe/1975march/1975-2-a1.pdf
Bennett, C., & Rockwell, K. (1995, December). Targeting outcomes of programs (TOP): An integrated approach to planning and evaluation. Unpublished manuscript. Lincoln, NE: University of Nebraska.
Campbell, D. T., & Stanley, J. (1963). Experimental and quasi-experimental designs for research. Chicago, IL: Rand McNally.
Di Tommaso, A. (2015, September). Evaluation designs. Paper presented at the 2015 AmeriCorps State and National Symposium, Arlington, VA.
Franz, N., & Archibald, L. (2018). Four approaches to building Extension program evaluation capacity. Journal of Extension, 56(4), Article 4TOT5. Available at: https://www.joe.org/joe/2018august/tt4.php
Franz, N., & Townson, L. (2008). The nature of complex organizations: The case of Cooperative Extension. In M. T. Braverman, M. Engle, M. E. Arnold, & R. A. Rennekamp (Eds.), Program evaluation in a complex organizational system: Lessons from Cooperative Extension (pp. 5–14). New Directions for Evaluation, 120. San Francisco, CA: Jossey-Bass. doi:10.1002/ev.272
Kirkpatrick, D. (1996). Great ideas revisited: Techniques for evaluating training programs. Training and Development, 50, 54–59.
Project STAR. (2006). Study designs for program evaluation. Burlingame, CA: JBS International. Available at: http://www.pacenterofexcellence.pitt.edu/documents/study_designs_for_evaluation.pdf
Radhakrishna, R. B., & Relado, R. Z. (2009). A framework to link evaluation questions to program outcomes. Journal of Extension, 47(3), Article 3TOT2. Available at: https://www.joe.org/joe/2009june/tt2.php