The Journal of Extension - www.joe.org

December 2011 // Volume 49 // Number 6 // Feature // v49-6a8

Factors Influencing Participant Perceptions of Program Impact: Lessons from a Virtual Fieldtrip for Middle-School Students

Abstract
Participant perceptions of program effectiveness and impact are undoubtedly a popular focus of Extension program evaluations. However, the effects of participants' characteristics and contextual variables on program perceptions, and how the resulting data can be used for program improvement, are less explored in evaluation studies. Using data from the evaluation of an electronic fieldtrip as an exemplar case, this article describes a study that used linear regression to examine the influences of demographic variables and program contextual factors on participants' perceptions of program effectiveness. The implications for Extension evaluation and programming are also discussed.


Omolola A. Adedokun
Assessment Coordinator
Discovery Learning Research Center
oadedok@purdue.edu

Loran C. Parker
Assessment Specialist
Discovery Learning Research Center
carleton@purdue.edu

Jamie Loizzo
Project Manager
Department of Agricultural Communication
Jloizzo@purdue.edu

Wilella D. Burgess
Managing Director
Discovery Learning Research Center
wburgess@purdue.edu

J. Paul Robinson
SVM Professor of Cytomics
School of Veterinary Medicine
jpr@flowcyt.cyto.purdue.edu

Purdue University
West Lafayette, Indiana

Introduction

In these days of increasing demand for impact documentation and accountability from funding agencies, the most frequently asked question for Extension professionals is "What happened as a result of your program?" (Radhakrishna & Martin, 1999). In the process of providing viable answers to the question of program impact and effectiveness, Extension educators and program planners are becoming increasingly aware of the importance of planning and conducting evaluations to determine the extent to which programs achieve their stated objectives and expected outcomes.

As noted by Rennekamp and Arnold (2009), ever since the publication of the September 1983 issue of the Journal of Extension dedicated to program evaluation, the Extension community has witnessed advances in program evaluation planning and development, evaluation implementation, and utilization. Contrary to prior practice, Extension educators now commonly include evaluation at the early stages of program planning and development (e.g., the use of logic models for planning evaluation) and conduct formative and summative program evaluation to demonstrate to stakeholders, especially funding agencies, that their programs achieved the desired outcomes and deserve continued funding.

Among other evaluation concerns, Extension educators and program planners are interested in examining the impact of their programs on the intended public or audience. Indeed, participant perceptions of program effectiveness and impact have become a popular focus of Extension program evaluations. While the increasing interest in evaluating participants' perceptions of programs is commendable, a notable limitation is that Extension evaluation studies seldom explore the effect of participants' characteristics and contextual variables on program perceptions, and rarely examine how the resulting data can be used for program improvement.

For example, informal and semi-formal education programs, including real and virtual visits to museums, science centers, zoos, universities, and other educational institutions, are designed for specific audiences to convey particular educational messages. How can Extension educators tell what factors influence audience perceptions of these programs? Using data from the evaluation of an electronic fieldtrip as an exemplar case, this article describes a study that uses linear regression to examine the influences of demographic variables and program contextual factors on participants' perceptions of program effectiveness. Understanding these factors will facilitate the creation and implementation of programs that are valued by the public.

Methods

Program Description

Teachers and school administrators are often challenged by the cost and logistics of field trips. Electronic Field Trips (EFTs) have been identified as effective avenues for reducing the challenges associated with field trips (Klemm & Tuthill, 2003; Placing & Fernandez, 2001). The zipTrips program was designed to provide middle school students with the opportunity to see and interact with university scientists and their exciting work without leaving their school. The goals of zipTrips include increasing student understanding of science, research, and career opportunities; enhancing student interest in science; and making university researchers and labs accessible to students and their teachers. Although there are separate interactive zipTrips experiences for 6th, 7th, and 8th grades, this article focuses on the 6th grade experience only. The EFT was piloted with a select audience before it was released for public viewing; both the pilot and public EFTs were approximately 45 minutes in duration and consisted of four core elements: an in-studio audience, live interaction with scientists, pre-recorded segments, and live experiments.

Data Description

Data came from student (N = 409; male = 55%; female = 45%) responses to pre- and post-participation surveys soliciting information about their demographic characteristics, attitudes, and interests in science. The post-participation survey also included questions about student understanding of the content and their perceptions of the impact of the program. Items regarding student interests and attitudes toward science were adapted from Jarvis and Pell (2002). The teachers were also asked to complete a post-participation survey soliciting information about their perceptions of the program.

Data Analysis

Data were coded and analyzed using the Statistical Package for the Social Sciences (SPSS version 18). The analysis occurred in two stages. First, descriptive statistics (frequencies and percentages) were used to examine student perceptions of the program. Table 1 suggests that students held positive and favorable perceptions of the program. For example, about 78% of the students reported that they learned a lot about what scientists do from watching zipTrips, 76% liked seeing the scientists in the program, and 76% viewed the program as interesting.

Table 1.
Student Perceptions of zipTrips

Item | S.A./A. F (%) | D./S.D. F (%)
I learned a lot about what scientists do from watching zipTrips | 312 (77.6) | 87 (21.6)
I really liked seeing the scientists in the live zipTrips show | 305 (75.9) | 95 (23.6)
Seeing the live zipTrips program was fun | 283 (70.0) | 118 (29.2)
Seeing the live zipTrips program was interesting | 310 (76.4) | 94 (23.2)
I would like to go on another zipTrips program | 292 (72.5) | 105 (26.1)
Note: F = frequency; % = percentage; A. = Agree; S.A. = Strongly Agree; D. = Disagree; S.D. = Strongly Disagree

The second stage of the analysis involved the examination of the effects of participants' characteristics and program contextual variables on their perceptions of zipTrips. Specifically, we developed and tested the regression model Y = a + b1X1 + b2X2 + b3X3 + b4X4 + b5X5 + b6X6 + e,

where:

Y = Perceptions of program impact;

X1 = Interest in science;

X2 = Perceived importance and relevance of science;

X3 = Understanding of program content;

X4 = Gender;

X5 = School type (i.e., public versus private); and

X6 = Whether or not the school participated in the pilot viewing.

The regression coefficient for each variable is represented by the corresponding b, and e refers to the unexplained error term, i.e., other factors outside of the model. The variables in the model are further described as follows.

  • Perceptions of program effectiveness: The dependent variable of interest was measured by a summated rating scale consisting of participants' responses to five questions included in the post-participation survey (see Table 1): "I learned a lot about what scientists do from watching zipTrips," "I really liked seeing the scientists in the live zipTrips show," "Seeing the live zipTrips program was fun," "Seeing the live zipTrips program was interesting," and "I would like to go on another zipTrips program." The reliability of the five items as measured by Cronbach's alpha was 0.87. Response categories for the items included in this variable and the other summated rating scales described below ranged from "strongly disagree" = 1 to "strongly agree" = 4.

  • Interest in science: Students' interest in science was measured by summing their responses to five items included in the pre-participation survey: "I like to study science in school," "I like watching science programs on TV," "I like learning about science," "I think I could be a scientist," and "People just like me can become scientists." The reliability of the five items was 0.73.

  • Understanding of program content: The extent to which students understood the content of the fieldtrip was measured by a summated rating scale consisting of four items: "Scientists use observations and experiments to help answer questions and solve problems," "Even though some animals look different from us, we can have similar body systems," "I learned from zipTrips that scientists can work outdoors as well as in laboratories," and "I learned a lot about what scientists do from watching zipTrips." The reliability of the four items was 0.65.

  • Perceived relevance of science: Student perception of the societal relevance and importance of science was measured by a summated rating scale consisting of their responses to four items included in the pre-participation survey: "Science affects everyone including me," "Science is an important subject," "Every day I use things made possible by science," and "Science can make our lives better." The reliability of these items was 0.64.

  • Gender: A dummy-coded variable with boys = 0 and girls = 1.

  • School type: Public schools were coded as 1, and private schools were coded as 0.

  • Pilot/non-pilot: Schools that participated in the pilot study were coded 1, and schools that did not participate in the pilot study were coded 0.

A forced-entry method of regression, in which all the variables are entered into the model at the same time, was used to ensure that random variations in the data did not influence the regression estimates (Field, 2005).
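For readers who wish to replicate this type of analysis outside of SPSS, the sketch below illustrates a forced-entry (simultaneous) regression of the kind described above, using Python's pandas and statsmodels libraries. The file name and column names are hypothetical placeholders rather than the authors' actual data; the structure of the model mirrors the equation given earlier.

    # A minimal sketch of a forced-entry regression, assuming a hypothetical
    # survey file and column names (not the authors' actual SPSS data set).
    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical survey data: one row per student.
    df = pd.read_csv("ziptrips_survey.csv")

    # Composite (summated) predictors and dummy-coded variables matching the
    # coding scheme in the article:
    #   gender: boys = 0, girls = 1
    #   public_school: public = 1, private = 0
    #   pilot_school: pilot = 1, non-pilot = 0
    predictors = [
        "interest_in_science",        # X1
        "relevance_of_science",       # X2
        "understanding_of_content",   # X3
        "gender",                     # X4
        "public_school",              # X5
        "pilot_school",               # X6
    ]

    X = sm.add_constant(df[predictors])   # adds the intercept term (a)
    y = df["program_perceptions"]         # summated perception scale (Y)

    # "Forced entry": all predictors are fit simultaneously in a single
    # ordinary least squares model rather than stepwise.
    model = sm.OLS(y, X, missing="drop").fit()
    print(model.summary())                # b, SE, t, p, and R-squared
    print(model.params)                   # unstandardized coefficients (b)

Standardized coefficients (B) can be obtained by z-scoring the outcome and the continuous predictors before fitting the same model.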

Table 2 contains the standardized (and un-standardized) regression weights and standard error values.

Table 2.
Regression Coefficients of Variables in the Model

Variables | b | SE | B
Interest in science | 0.15 | 0.05 | 0.15**
Perceived importance and relevance of science | 0.06 | 0.07 | 0.04
Understanding of program content | 0.67 | 0.07 | 0.43***
Gender | 0.33 | 0.26 | 0.05
Public versus private school | -0.98 | 0.45 | -0.09*
Pilot versus non-pilot school | 1.49 | 0.35 | 0.18***
Note: b = un-standardized coefficient; SE = standard error; B = standardized coefficient; * = p<.05; ** = p<.001; *** = p<.000.

The six independent variables combined explained 32% (R2 = 0.32) of the variability in student perceptions of program effectiveness. Also, the effect size (i.e., Cohen's f2) for the model, 0.49, indicates a good model fit. The analysis revealed that four of the six factors included in the model had statistically significant relationships with student perceptions of the program. Each of the four significant factors, their relationships to the dependent variable, and the lessons learned are discussed below.
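For readers unfamiliar with this effect-size measure, Cohen's f2 for a regression model is conventionally derived from R2 as

f^2 = R^2 / (1 - R^2)

so an R2 of roughly one-third corresponds to an f2 of approximately 0.5. By Cohen's conventional benchmarks (0.02 small, 0.15 medium, 0.35 large), the value reported here represents a large effect.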

  • Not surprisingly, we found that student interest in science is positively related to their perceptions of the zipTrips program (β = 0.15, p<.05); that is, the higher students' interest in science, the more favorable their perceptions of the program. This finding suggests that the program is successful at reaching students who are already interested in science, but it also challenges the project management team to devise strategies for program improvement that ensure the program appeals to students who may not have previously developed an interest in science. The goal of the program is not only to enhance students' existing interest in science, but also to stimulate new interest in students who may not have had opportunities to develop an interest in science and science careers. We learned that modifications to program content and implementation might be necessary to ensure that the benefits of the program are not limited to students who are already interested in science and science careers.

  • Understanding of program content was a significant positive factor influencing student perception of the program (β = 0.43, p<.05); that is, students who understood the "take home message" of the program reported more favorable perceptions. This is not surprising given that learners tend to attach greater value or appreciation to programs and experiences that reinforce their existing knowledge or experiences. As David Ausubel famously synthesized his theory of learning: "The most important single factor influencing learning is what the learner already knows" (Ausubel, 1968, p. iv). This result highlights the need to create programs that enhance the educational experiences of participants by building on their existing knowledge and experiences, as well as the need to consider learners' prior experiences and knowledge when devising evaluation strategies or conducting evaluation of education and (or) Extension programs. Controlling for learners' prior learning experiences and knowledge is often a challenge for Extension professionals who work with diverse audiences that may have widely differing experiences and existing knowledge bases. However, program planners must be able to differentiate their approach to educational delivery to ensure that participants can connect the "take home message" of Extension programs to their previous or current lived experiences.

  • Students in private schools had more positive perceptions of the program than their counterparts in public schools (β = -0.09, p<.05). While we do not claim to have a perfect understanding of why private school students reported more favorable perceptions than their public school peers, a possible explanation is that private school students and teachers may have more experience with field trips and special events due to fewer budget constraints and (typically) higher parental involvement and better access to community and educational resources. For example, higher parental involvement may allow private schools to participate in more field trips because adult chaperones are available to supervise and help plan the event. This experience may have made the zipTrips EFT a more "normal" event for students. Similarly, Falk and colleagues (1978) have shown that adjustment to new settings can interfere with the effectiveness of fieldtrips for elementary school students. If students regard the EFT as a normal educational experience rather than a novel event, then they are more likely to view the program as worthwhile and effective.

  • Students in schools that participated in the pilot study had more positive perceptions than students in non-pilot schools (β = 0.18, p<.05). Although these were not the same 6th grade students who participated in the pilot study a year before the main public debut, their teachers were the same 6th grade teachers. These pilot teachers were also involved in the development of the program; they helped to ensure that program content was in line with the Indiana State Science standards for 6th grade, and their participation was well supported by their principals and administrators. Moreover, the teachers were very familiar with the supplementary online resources. Research has shown that novel learning settings, in particular field trip sites, can inhibit student cognitive and affective gains from field trips as well as negatively affect their attitudes towards and perceptions of the learning experience (Rudman, 1994).

    • Rudman (1994) also argued that teachers can reduce the novelty effect by becoming familiar with the field trip site and related curricular resources ahead of time. Although Rudman was referring specifically to real physical field trips, we believe that her arguments are also applicable to virtual field trips. We suspect that the pilot teachers' familiarity with the program helped orient the students to the program and reduce the novelty factor, thereby enhancing the perceptions and experiences of their students.

    • In line with Falk and colleagues' novelty hypothesis, there is the possibility that the teachers' previous awareness of and participation in the program gave them a better understanding of the program, what to expect, and how best to support student engagement in the program. Therefore, it was not surprising to observe that their students had more favorable perceptions of the program. However, the important lesson we learned was that administrator buy-in and teacher involvement in the development and implementation of educational or Extension programs may be important factors influencing the extent to which outcomes are achieved as well as participants' perceptions of programs.

  • Gender and perceived importance and relevance of science did not have statistically significant effects on student perceptions of the program.

Implications for Extension Evaluation

Educators and program planners are justifiably interested in examining the impact of their programs on the intended audiences. Just as important, however, are the factors that influence how a program is perceived by the audience (Forneris, Danish, & Fries, 2009). Although the findings reported here are program specific, we believe that the results reinforce some important findings from the extant research literature on field trips (e.g., Falk, Martin, & Balling, 1978; Rudman, 1994), as well as lessons that may prove useful to Extension professionals in program planning and evaluation. In particular, the study reaffirms the importance of (1) considering participants' prior knowledge and (2) cultivating partnerships with teachers, administrators, and other stakeholders when designing programs and program evaluations. Taken together, the findings illustrate a simple method of process evaluation and its benefits for Extension program evaluation, highlighting the critical need for Extension evaluators to combine process with outcome evaluation to gain a deeper understanding of the dynamics of their programs and identify potential areas for improvement.

In recent decades, the evaluation community has been challenged to combine both process and outcome evaluations in determining program impact (Donaldson, 2001; Hennessey & Greenberg, 1999). While outcome evaluations seek to determine if desired outcomes are achieved (e.g., positive responses from participants), process evaluations investigate how (and why) program outcomes are achieved. For example, the regression analysis described above helped us to gain a deeper understanding of how pre-existing characteristics (e.g., gender, interest in science) may (or may not) influence student perceptions of program effectiveness.

If the evaluation of the program had been limited to the analysis of participants' perceptions of the program using descriptive statistics (i.e., frequency counts of the number of students who agreed or disagreed with particular aspects of the program described in Table 1), the project management team would have gained little further insight into the dynamics of the program. However, by testing a simple regression model to understand the factors related to participants' perceptions of program effectiveness, the project management team was able to better understand the dynamics of the program and identify some areas for improvement. For example, the results suggest that modifications to program content and implementation may be necessary to ensure that the benefits of the program are not limited to students who are already interested in science and science careers.

In a broader sense, this article provides a simple example of how to evaluate programs for differential impacts by including participants' demographic characteristics and other descriptive variables in statistical program models. For example, the results suggested that students with prior interest in science had more favorable perceptions of the program and that gender did not influence student perceptions of program effectiveness, despite the project management team's deliberate attempts to make the program engaging both for girls and for students without a strong background in science. Extension programs that seek to stimulate interest in topics such as science may test program models that include variables depicting participants' characteristics to determine whether their programs are reaching targeted audiences.

Although the process evaluation described in this article uses a quantitative model, we must emphasize that we do not argue that process evaluations should be conducted solely via statistical techniques. Evaluators may also use qualitative techniques (e.g., focus group sessions and reflective journals) to understand participants' perceptions of program effectiveness and how participant characteristics and contextual factors may influence perceived program impact.

In conclusion, we offer some suggestions on how the study reported here may guide Extension practitioners in conducting process evaluations. These suggestions are directly applicable to evaluators who may be interested in using regression models to understand differential program impact, especially those responsible for multisite programs (e.g., county-wide programs) and programs administered to multiple sub-population groups (e.g., inner-city youth versus rural youth).

  • Identify and collect baseline data on factors that may influence program impact. Information regarding these factors can be gleaned from the literature, personal experience, and knowledge of the specific program or similar programs. These variables can then be included in statistical models to examine differential program impact. Understandably, most Extension programs are small scale, and evaluators often operate under tight resources and other constraints that may hinder rigorous, extensive, and expensive data collection. However, data regarding basic demographic and program characteristics (e.g., gender, indicators of socio-economic status, location, race/ethnicity, etc.) can be collected and used to enhance the understanding of program dynamics.

  • For Extension programs that are implemented (or replicated) across multiple sites and (or) cohorts, use similar or standardized quantitative measures across sites, cohorts, or contexts. This will allow data to be pooled (with descriptors to differentiate contextual variables such as location and cohort) for systematic program evaluation.

  • Consider using multiple items (e.g., rating scales measuring satisfaction or efficacy) to measure outcome variables, and conduct reliability analysis to determine how well the items co-vary or relate to one another, as well as the extent to which they can be combined to create composite variables measuring the outcomes of interest (a brief sketch of this approach appears after this list).

  • Finally, do not postpone process evaluation until you have a "perfect" data set or until the final stages of a program's life. Rather, give thought to process evaluation starting from the early stages of program planning and implementation. The program logic model can be a useful tool for planning process evaluations because it aids evaluators and program planners in thinking about how a program is expected to work. (Interested readers should see the W. K. Kellogg Foundation (2001) guide for more on program logic models.) Similarly, we make a special plea for the documentation of program characteristics and features that could help explain program dynamics and program impact.
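As an illustration of the reliability-analysis suggestion above, the sketch below shows one way to compute Cronbach's alpha for a set of rating-scale items and, if the reliability is acceptable, to combine them into a summated composite. The file and item names are hypothetical placeholders; the calculation follows the standard formula for coefficient alpha rather than the authors' SPSS procedure.

    # A minimal sketch (hypothetical file and item names) showing how
    # rating-scale items can be checked for reliability and summed into a
    # composite score, as recommended above.
    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        """Coefficient alpha: (k/(k-1)) * (1 - sum of item variances / variance of total score)."""
        items = items.dropna()
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    # Hypothetical post-survey responses, coded 1 = strongly disagree ... 4 = strongly agree.
    df = pd.read_csv("post_survey.csv")
    perception_items = df[["liked_scientists", "program_fun",
                           "program_interesting", "another_trip"]]

    alpha = cronbach_alpha(perception_items)
    print(f"Cronbach's alpha = {alpha:.2f}")

    # If alpha is acceptable (commonly >= 0.70), sum the items into a composite
    # outcome variable for use in subsequent regression models.
    if alpha >= 0.70:
        df["program_perceptions"] = perception_items.sum(axis=1)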

Acknowledgement

This project was developed with partial support from the Howard Hughes Medical Institute (Grant #51006097). The contents of this article are the authors' and do not necessarily represent the views or policies of the Howard Hughes Medical Institute.

References

Ausubel, D. P. (1968). Educational psychology: A cognitive view. New York: Holt, Rinehart and Winston.

Donaldson, S. I. (2001). Mediator and moderator analysis in program development. In S. Sussman (Ed.), Handbook of program development for health behavior research and practice (pp. 470-500). Thousand Oaks, CA: Sage Publications, Inc.

Falk, J. H., Martin, W. W., & Balling, J. D. (1978). The novel field-trip phenomenon: Adjustment to novel settings interferes with task learning. Journal of Research in Science Teaching, 15(2), 127-134.

Field, A. (2005). Discovering statistics using SPSS. Thousand Oaks: Sage.

Forneris, T., Danish, S. J., & Fries, E. (2009). How perceptions of an intervention program affect program outcomes. Journal of Educational and Psychological Consultation, 19(2), 130-149.

Hennessey, M., & Greenberg, J. (1999). Bringing it all together: Modeling intervention processes using structural equation modeling. American Journal of Evaluation, 20(3), 471-480.

Jarvis, T., & Pell, A. (2002). Effect of the Challenger experience on elementary children's attitude to science. Journal of Research in Science Teaching, 39, 979-1000.

W. K. Kellogg Foundation. (2001). Using logic models to bring together planning, evaluation, and action: Logic model development guide. Battle Creek, MI: Author.

Klemm, E. B., & Tuthill, G. (2003). Virtual field trips: Best practices. International Journal of Instructional Media, 30(2), 177-194.

Placing, K., & Fernandez, A. (2001). Virtual experiences for secondary science teaching. Australian Science Teachers' Journal, 48(1), 40-43.

Radhakrishna, R., & Martin, M. (1999). Program evaluation and accountability: Training needs of Extension agents. Journal of Extension [On-line], 37(3), Article 3RIB1. Available at: http://www.joe.org/joe/1999june/rb1.html

Rennekamp, R. A., & Arnold, M. E. (2009). What progress, program evaluation? Reflections on a quarter-century of Extension evaluation practice. Journal of Extension [On-line], 47(3), Article 3COM1. Available at: http://www.joe.org/joe/2009june/comm1.php

Rudman, C. L. (1994). A review of the use and implementation of science fieldtrips. School Science and Mathematics, 94(3), 138-141.