February 2007 // Volume 45 // Number 1 // Tools of the Trade // 1TOT3

Conducting Program Evaluations Using the Internet

Abstract
Program evaluations are becoming a more important responsibility for most Extension professionals. Despite an abundance of supporting resources, many Extension educators still fail to conduct meaningful evaluations of their programs, presumably because of time constraints and doubts about the quality of input received from evaluations. Web-based evaluations may be a tool to help educators conduct evaluations that are time-efficient and provide better results. Here I discuss my experience with Web-based evaluations and compare their advantages and disadvantages with those of traditional pen and paper evaluations.


Ben C. West
National Outreach Coordinator
The Berryman Institute
Mississippi State, Mississippi
benw@cfr.msstate.edu


Background

More and more, Extension professionals must conduct evaluations of their educational programs. Administrators often must demonstrate the value of Extension, and they in turn expect Extension educators to produce reliable metrics about program benefits, outcomes, and impacts (O'Neill, 1998; O'Neill & Richardson, 1999; Radhakrishna & Martin, 1999; Bailey & Deen, 2002).

Despite information in JOE articles and in-service training, many Extension professionals do not conduct meaningful evaluations of their programs. I have witnessed countless Extension programs that received little or no evaluation. In instances where evaluations are conducted, the paper evaluation sheets all too often reside on a shelf or in a briefcase without being compiled or analyzed. Frequently, evaluation results are not communicated to interested parties.

Many reasons may explain why some educators are reluctant to conduct evaluations, including limited time and resources and inadequate knowledge of evaluation methods (Chapman-Novakofski et al., 1997). Time constraints, especially when combined with the perception that evaluations produce little useful information, often push program evaluation toward the bottom of a long list of responsibilities.

To better incorporate evaluations into their programs, Extension professionals need a tool for administering evaluations that 1) requires little time and effort and 2) yields meaningful results.

Technology to the Rescue

The use of Internet surveys is becoming more popular in the arena of social science research (O'Neill, 2004), but few Extension educators have begun using this technology to conduct evaluations of their programs. About a year ago, I stopped conducting pen and paper evaluations at the conclusion of my programs and instead began administering Web-based evaluations. I have found this technology to be easy to use, affordable, and quite effective.

Sometime within the week following a program, I send an e-mail to all participants thanking them for their participation and asking for their input via a simple online survey, to which I provide a link. To employ this strategy, one obviously must have an e-mail list of participants, which I collect during the program. Other options for deploying the survey exist as well, such as including a link to the evaluation on a Web site.
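For educators comfortable with a little scripting, even the follow-up message can be automated. Below is a minimal sketch in Python of what that might look like; the mail server, sender and participant addresses, and survey link are all hypothetical placeholders, not details of any particular service.

import smtplib
from email.message import EmailMessage

SURVEY_URL = "https://www.surveymonkey.com/s/example"  # hypothetical survey link
participants = ["participant@example.com"]             # e-mail list collected during the program

with smtplib.SMTP("smtp.example.edu") as server:       # hypothetical campus mail server
    for address in participants:
        msg = EmailMessage()
        msg["Subject"] = "Thank you for attending: workshop evaluation"
        msg["From"] = "educator@example.edu"            # placeholder sender address
        msg["To"] = address
        msg.set_content(
            "Thank you for participating in the workshop.\n"
            "Please share your input through this short online survey:\n"
            + SURVEY_URL
        )
        server.send_message(msg)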

Extension educators have two basic choices for deploying Web-based evaluations: 1) consult with a Web designer, either within the campus system or externally, or 2) use one of the many commercial survey services. After considering my needs and investigating options, I subscribed to the service offered at http://www.surveymonkey.com/. This service makes it easy to create surveys, invite people to participate, and summarize results.

In the year that I have used this service, I have conducted numerous needs assessments and program evaluations, and I have noted both advantages and disadvantages.

Advantages

Data Entry and Analyses

One of the primary reasons, I believe, that educators do not implement meaningful evaluations is simply a matter of time: it takes considerable time and effort to compile and analyze evaluation data. With Web-based evaluations, much of this effort is eliminated. Data are instantly recorded in a database, and simple statistics (means, frequencies, etc.) are produced automatically. Users who desire more sophisticated analyses can download the raw data into a database, spreadsheet, or statistical software package.
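As an illustration of how little effort the more sophisticated route requires, the sketch below reads a hypothetical CSV export into Python with the pandas library and reproduces the kinds of summaries a survey service computes automatically. The file name and column names are assumptions; each service labels its downloads differently.

import pandas as pd

# Read the raw data downloaded from the survey service (hypothetical file name)
responses = pd.read_csv("evaluation_export.csv")

# Simple statistics of the kind produced automatically online
print(responses["overall_rating"].mean())            # mean rating (hypothetical column)
print(responses["overall_rating"].value_counts())    # frequency of each rating

# A more sophisticated summary: ratings broken out by subgroup
print(responses.groupby("county")["overall_rating"].describe())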

Quality of Responses

Unfortunately, many program participants provide little information on evaluations handed out at the conclusion of a program, particularly with regard to open-ended questions. To test the ability of paper versus Web-based evaluations to elicit detailed responses to open-ended questions, I examined evaluation responses from several 3-day workshops conducted from June 2004 to June 2005 and from June 2005 to June 2006. During the earlier period, I used paper evaluations at the conclusion of each workshop; during the latter, I used Web-based evaluations administered within a week after each workshop ended. All of the workshops covered similar topics for the same clientele.

For simplicity, I examined only the number of words used to respond to the question "What did you like most about the workshop?" The responses to the Web-based evaluations were clearly much longer than those to the paper evaluations (Table 1). Although I present data for only a single question, I have noticed similar trends for all open-ended questions in my evaluations.

Table 1.
Responses to the Question "What Did You Like Most About the Workshop?"

Evaluation Method   # Workshops   # Responses   Mean # Words/Response
Pen and Paper       5             78            7.6
Web-Based           4             87            23.0
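The metric in Table 1 is simple to reproduce once responses are exported: count the words in each response and average the counts for each evaluation method. A minimal sketch, with made-up responses standing in for the actual workshop data:

# Average number of words across a list of open-ended responses
def mean_words(responses):
    counts = [len(r.split()) for r in responses]
    return sum(counts) / len(counts)

# Illustrative stand-ins for the actual evaluation data
paper_responses = ["Good speakers", "The field trip"]
web_responses = ["I especially appreciated the hands-on field sessions and the chance to talk with the instructors afterward"]

print(f"Pen and paper: {mean_words(paper_responses):.1f} words/response")
print(f"Web-based: {mean_words(web_responses):.1f} words/response")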

I believe there are two primary reasons for the more detailed responses to Web-based evaluations. First, many people are now more comfortable and efficient writing on a computer than with pen and paper and thus respond to open-ended questions in more detail. Second, with the Web-based approach, participants can choose the most convenient and opportune time to complete the evaluation and can do so in their own home or office. They are not forced to complete an evaluation hurriedly at the end of a long program, amid a myriad of distractions.

Disadvantages

Response Rates

With paper evaluations, response rates generally are 100% or nearly so; "You must complete this evaluation before you leave!" is a common mantra of Extension educators. When I ask people to go online and complete an evaluation several days after a program, I inevitably lose some respondents, but my response rates have averaged about 90%. Moreover, one can take steps to maximize response. I recently began informing participants, at the conclusion of a program, that I would be sending an e-mail inviting them to complete a Web-based evaluation and asking for their commitment to do so. Since adopting this tactic, my response rates have been 100%.

Audience Capabilities

My clientele primarily consist of professionals who have convenient access to computers and reliable Internet connections, so I know they can easily go online and complete Web-based evaluations. Other clientele may not enjoy easy access to these resources and thus may be unable or unwilling to complete an online evaluation. Extension educators must understand their audience and its capabilities before deciding whether Web-based evaluations are a wise choice.

Conclusions

I have found Web-based evaluations to be an excellent tool. The technology is powerful, affordable, easy to use, and reliable. I can more easily compile and share my evaluation results with cooperators, peers, and administrators. I encourage Extension professionals to try this approach to program evaluation, while also being sensitive to the needs and capabilities of their clientele.

References

Bailey, S. J., & Deen, M. Y. (2002). A framework for introducing program evaluation to Extension faculty and staff. Journal of Extension [On-line], 40(2). Available at: http://www.joe.org/joe/2002April/iw1.html

Chapman-Novakofski, K., Boeckner, L. S., Canton, R., Clark, C. D., Keim, K., Britten, P., & McClelland, J. (1997). Evaluating evaluation--What we've learned. Journal of Extension [On-line], 35(1). Available at: http://www.joe.org/joe/1997february/rb2.html

O'Neill, B. (1998). Money talks: Documenting the economic impact of Extension personal finance programs. Journal of Extension [On-line], 36(5). Available at: http://www.joe.org/joe/1998october/a2.html

O'Neill, B. (2004). Collecting research data online: Implications for Extension professionals. Journal of Extension [On-line], 42(3). Available at: http://www.joe.org/joe/2004june/tt1.shtml

O'Neill, B., & Richardson, J. G. (1999). Cost-benefit impact statements: A tool for Extension accountability. Journal of Extension [On-line], 37(4). Available at: http://www.joe.org/joe/1999august/tt3.html

Radhakrishna, R., & Martin, M. (1999). Program evaluation and accountability training needs of Extension agents. Journal of Extension [On-line], 37(3). Available at: http://www.joe.org/joe/1999june/rb1.html