The Journal of Extension - www.joe.org

February 2016 // Volume 54 // Number 1 // Research In Brief // v54-1rb1

How Do Mode and Timing of Follow-Up Surveys Affect Evaluation Success?

Abstract
This article presents an analysis of the evaluation methods used in a well-designed and comprehensive evaluation effort for a significant Extension program. The data collection methods were analyzed by questionnaire mode and by timing of the follow-up surveys. Response rates from the short- and long-term follow-ups and the different questionnaire modes also were examined by occupational category. Overall, the electronic questionnaire mode and the 2-month follow-up yielded significantly higher response rates. The findings and recommendations are useful to Extension educators who must decide how to capture program outcomes with limited resources.


Vikram Koundinya
Evaluation Associate
vkoundinya@wisc.edu

Jenna Klink
Senior Evaluation Specialist
jlklink@wisc.edu

Philip Deming
Evaluation Assistant
pjdeming12@gmail.com

Andrew Meyers
Evaluation Associate
atmeyers88@gmail.com

Kevin Erb
Conservation Professional Training Coordinator
kevin.erb@ces.uwex.edu

Environmental Resources Center
University of Wisconsin–Extension
Madison, Wisconsin

Introduction

Evaluation has been integral to Extension programming as a means of documenting program outcomes and impact (Lamm, Israel, & Diehl, 2013). With reduced state and federal funding, Extension organizations are increasingly recognizing the importance of using evaluation data to demonstrate program value (McClure, Fuhrman, & Morgan, 2012). Further, the need for evaluating Extension programs with limited resources is becoming more common. Current budget cuts coupled with expectations to rigorously evaluate programs (Tobin, Thomson, Radhakrishna, & LaBorde, 2012) and a lack of evaluation knowledge among most Extension educators (Bailey & Deen, 2002; Ghimire & Trechter, 2012 [as cited in Ghimire & Martin, 2013]; Jayaratne, Lyons, & Palmer, 2008) make it difficult to meaningfully measure program outcomes. As a result, Extension educators do not contribute significantly to Extension's evaluation efforts (Holz-Clause, Koundinya, Franz, & Borich, 2012). To help Extension educators contending with inadequate resources effectively evaluate their programs, experiences gleaned from well-designed and comprehensive evaluation efforts are needed.

The evaluation unit at the Environmental Resources Center of University of Wisconsin–Extension, educational training specialists, and faculty at University of Wisconsin–Extension developed and implemented a comprehensive evaluation of a manure expo program. Unlike traditional field days, which cover a range of topics for a diverse audience, expos were designed to provide in-depth information on a specific management topic (DeJong-Hughes, Erb, & Everett, 2011). The North American Manure Expo has achieved impressive outcomes over the years (Deming, Meyers, & Klink, 2014; Klink & Meyers, 2013). This all-day event was started in 2001 and has been offered annually since 2005 in various locations, including Iowa, Michigan, Minnesota, Missouri, Nebraska, Ohio, Pennsylvania, Wisconsin, and Ontario. This article presents findings from an analysis of the evaluation methods used for the 2012 North American Manure Expo, along with recommendations for Extension educators who must decide how to capture program outcomes with limited resources and/or who are not knowledgeable in evaluation theory or practice.

The 2012 North American Manure Expo was held in August near Sauk City, Wisconsin, and attended by an estimated 1,000 people from 23 U.S. states as well as Canada, Brazil, and France. The Expo focused on the latest manure management technologies and research useful to commercial manure applicators, farmers, environmental professionals, agency staff, and other interested people. The educational information was presented through equipment demonstrations and seminars/classes focused on manure management.

Purpose and Objectives

The purpose of the study reported here was to analyze the data collection methods used in the evaluation of the Expo to assist educators who have to conduct program evaluations with limited resources and/or are not knowledgeable in evaluation theory or practice. The specific objectives of the study were

  • to determine whether the paper and electronic questionnaire modes yield significantly different response rates,
  • to determine whether the 2-month and 10-month follow-up surveys yield significantly different response rates,
  • to determine whether provision of an email address with contact information differs significantly by occupational category,
  • to determine whether paper and electronic questionnaire modes yield significantly different response rates within each major occupational category, and
  • to determine whether the 2-month and 10-month follow-up surveys capture considerably different outcome measures.

Methods

Evaluators and educators at the University of Wisconsin–Extension established the face validity and content validity of the evaluation questionnaires. Follow-up surveys were sent to participants 2 months and 10 months after the Expo to capture the short-term and medium-term outcomes of the event. Paper and electronic questionnaire modes were used for both follow-ups. Expo participants were asked to provide their contact information on the day of the event; those who provided an email address were sent electronic surveys, and those who did not were mailed paper surveys. Participants whose emails bounced back as undeliverable also were sent paper copies. To maximize response rates, the Dillman (2007) method of an initial survey send-out followed by three reminders to nonrespondents was used for both time periods and both survey modes (electronic and paper). For paper surveys, reminders 1 and 3 were postcards asking respondents to complete and return the survey, and reminder 2 was a re-send of the survey itself. Stamped return envelopes were included with the paper surveys. For online surveys, all four send-outs consisted of an email (with the subject heading and message modified over time) that included a link to the survey. The electronic survey was administered using Qualtrics software, and data from the paper surveys were entered into Qualtrics.

Two analysis methods were used. For one method, the sample was restricted to respondents who completed both the 2-month and 10-month follow-up surveys; the other method included the full samples from both follow-ups without pairing the responses. Data were analyzed using IBM SPSS version 22.
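To make the pairing step concrete, the following is a minimal sketch in Python with pandas, not the study's actual workflow (analyses were run in IBM SPSS version 22); the file names and the participant_id column are hypothetical.

```python
# A minimal sketch of the two analysis approaches, assuming pandas and
# hypothetical Qualtrics exports; the study itself used IBM SPSS version 22.
import pandas as pd

# Hypothetical file names for the two follow-up exports.
two_month = pd.read_csv("followup_2month.csv")
ten_month = pd.read_csv("followup_10month.csv")

# Method 1: restrict the sample to respondents who completed BOTH surveys
# by pairing on a hypothetical participant identifier.
paired = two_month.merge(
    ten_month, on="participant_id", suffixes=("_2mo", "_10mo")
)

# Method 2: analyze the full, unpaired samples from each follow-up.
full_2mo, full_10mo = two_month, ten_month
```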

Results

Objective 1: Determine Whether the Paper and Electronic Questionnaire Modes Yield Significantly Different Response Rates

Chi-square analysis revealed that the electronic survey mode yielded a significantly higher overall response rate (42%) than the paper survey mode (35%). With regard to the two follow-up time points, the electronic mode yielded a significantly higher response rate than the paper mode at 2 months but not at 10 months (Table 1), indicating that the overall difference in response rates was driven largely by the disparity at 2 months.

Table 1.
Comparison of Response Rates on the 2- and 10-Month Follow-Up Surveys

                         Both follow-ups combined*   2-month follow-up**      10-month follow-up (NS)
                         Paper     Electronic        Paper     Electronic     Paper     Electronic
Sample size              508       807               253       403            255       404
Number of responses      177       337                99       205             78       132
Response rate            35%       42%               39%       51%            31%       33%

* p < 0.05. ** p < 0.01. NS = nonsignificant.
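For readers who want to reproduce these tests, the following is a minimal sketch of the Pearson chi-square test of independence using Python's SciPy (an assumption; the study used SPSS), applied to the published counts in Table 1. The same helper applies to the timing comparisons in Table 2 and the within-occupation comparisons in Table 4.

```python
# A minimal sketch, assuming SciPy, of the Pearson chi-square test of
# independence behind Tables 1, 2, and 4; the study itself used SPSS.
from scipy.stats import chi2_contingency

def compare_response_rates(resp_a, n_a, resp_b, n_b):
    """Test whether two groups' response rates differ (2x2 chi-square)."""
    table = [[resp_a, n_a - resp_a],   # group A: responded, did not respond
             [resp_b, n_b - resp_b]]   # group B: responded, did not respond
    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    return chi2, p

# 2-month follow-up, paper (99/253) vs. electronic (205/403): p < 0.01
print(compare_response_rates(99, 253, 205, 403))

# 10-month follow-up, paper (78/255) vs. electronic (132/404): nonsignificant
print(compare_response_rates(78, 255, 132, 404))
```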

Objective 2: Determine Whether the 2-Month and 10-Month Follow-Up Surveys Yield Significantly Different Response Rates

Chi-square analysis showed that, overall, the 2-month follow-up yielded a significantly higher response rate (46%) than the 10-month follow-up (32%). The same trend was observed when looking individually at the paper and electronic modes, with the 2-month follow-ups yielding significantly higher response rates than the 10-month follow-ups, and the difference was more pronounced for the electronic mode than for the paper mode (Table 2).

Table 2.
Comparison of Response Rates on the Paper and Electronic Questionnaire Modes

                         Both modes combined***      Paper mode*              Electronic mode***
                         2-month   10-month          2-month   10-month       2-month   10-month
Sample size              656       659               253       255            403       404
Number of responses      304       210                99        78            205       132
Response rate            46%       32%               39%       31%            51%       33%

* p < 0.05. *** p < 0.001.

Objective 3: Determine Whether Provision of an Email Address with Contact Information Differs Significantly by Occupational Category

Livestock farmers, commercial manure applicators, agency staff, and exhibitors made up the vast majority (around 90%) of respondents who provided any contact information. A chi-square analysis indicated that the percentage of people providing both an email address and a mailing address differed significantly by occupational category (p < 0.001).

Of the Expo participants who provided an email address for contact information, 31% were livestock farmers, followed by agency staff (24%), exhibitors (19%), and commercial manure applicators (14%). The same occupational categories constituted 54%, 3%, 3%, and 38%, respectively, of participants who provided only a mailing address for contact information (Table 3).

Table 3.
Distribution of Occupational Categories by Type of Contact Information Provided

Occupation                         Provided email and mailing     Provided mailing address
                                   addresses (electronic mode)    only (paper mode)
Livestock farmer***                 31% (n = 126)                  54% (n = 137)
Agency staff***                     24% (n = 97)                    3% (n = 7)
Exhibitor***                        19% (n = 78)                    3% (n = 8)
Commercial manure applicator***     14% (n = 58)                   38% (n = 96)
Total                              100% (n = 404)                 100% (n = 255)

*** p < 0.001.
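As an illustrative sketch only (SciPy is an assumption; the study used SPSS), the omnibus test behind Table 3 can be reproduced from the published category counts:

```python
# A minimal sketch, assuming SciPy, of a chi-square test on the Table 3
# counts: occupational category vs. type of contact information provided.
from scipy.stats import chi2_contingency

table3 = [
    [126, 137],  # livestock farmer: email+mail, mail only
    [ 97,   7],  # agency staff
    [ 78,   8],  # exhibitor
    [ 58,  96],  # commercial manure applicator
]
chi2, p, dof, expected = chi2_contingency(table3)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2g}")  # p < 0.001
```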

Objective 4: Determine Whether Paper and Electronic Questionnaire Modes Yield Significantly Different Response Rates Within Each Major Occupational Category

Chi-square tests were computed to determine whether response rates differed between the paper and electronic modes for the two major occupational groups at the 2-month and 10-month follow-ups. The response rate of commercial manure applicators on the 2-month follow-up was significantly higher for the electronic mode (45%) than for the paper mode (25%), whereas no such significant difference was observed on the 10-month follow-up (Table 4). For livestock farmers, the response rate on the 10-month follow-up was significantly higher for the paper mode (31%) than for the electronic mode (18%), whereas no statistically significant difference was observed on the 2-month follow-up (Table 4).

Table 4.
Comparison of Response Rates by Questionnaire Mode at Both Follow-Up Time Points for Commercial Manure Applicators and Livestock Farmers

                         Commercial manure applicator               Livestock farmer
                         2-month*            10-month               2-month             10-month*
                         Paper  Electronic   Paper  Electronic      Paper  Electronic   Paper  Electronic
Sample size              96     58           96     58              137    126          137    126
Number of responses      24     26           26     19               42     44           42     23
Response rate            25%    45%          27%    33%             31%    35%          31%    18%

* p < 0.05.

Objective 5: Determine Whether the 2-Month and 10-Month Follow-Up Surveys Capture Considerably Different Outcome Measures

The 2-month follow-up survey captured outcomes related to sharing knowledge, learning about/implementing nutrient management technologies, and making new business connections as well as the 10-month follow-up did. However, outcomes related to purchasing and selling equipment and creating as-applied maps using GPS were captured better on the 10-month follow-up (Table 5). There was a difference of 27 percentage points between the 2- and 10-month follow-ups in the percentage of respondents purchasing equipment (8% vs. 35%). Similarly, percentage point differences of 16 and 12 were observed for the outcomes related to selling equipment and creating as-applied maps using GPS, respectively. The difference was merely 0–3 percentage points for the other three variables (Table 5).

Relatively few respondents reported money or time saved because of the Expo, but both the number of reports and the amounts reported increased over time. At 2 months, five attendees reported an average of 17 hr saved, whereas at 10 months, 15 attendees reported an average of 69 hr saved. Similarly, at 2 months, three attendees reported an average of $4,400 saved, whereas at 10 months, eight attendees reported an average of $6,275 saved.

Table 5.
Comparison of Evaluation Outcome Measures on the Two Follow-Up Surveys

                                                      2-month follow-up    10-month follow-up
Outcome                                               Frequency    %       Frequency    %        n
Made a new business connectionᵃ                        48          67       48          67       72
Shared gained knowledge with others                   129          95      126          93      136
Learned about/implemented nutrient management
  technologies/practicesᵇ                              59          64       56          61       92
Created as-applied maps using GPS                       0           0        7          12       58
Sold additional equipment/service                      11          34       16          50       32
Purchased equipment, product, or service seen
  at the Expo                                           5           8       22          35       63

Note. n includes participants who responded to both follow-ups. Respondents who marked "N/A" were excluded from analysis for each item.
ᵃThe relevant question was worded slightly differently on the two follow-up surveys: 2-month survey, "Got lead(s) for further business"; 10-month survey, "Made a new business connection with someone you met at Expo."
ᵇThe relevant question was worded slightly differently on the two follow-up surveys: 2-month survey, "Learned about technologies related to nutrient management, such as GPS, sensor, load cells"; 10-month survey, "Implemented nutrient management technologies/practices."

Conclusions

Five conclusions were drawn on the basis of the findings from this study:

  1. The electronic questionnaires yielded significantly higher response rates than the paper surveys on the 2-month follow-up survey, whereas no significant difference existed on the 10-month follow-up survey.
  2. The 2-month follow-up survey yielded significantly higher response rates for both the paper and electronic modes as compared to the 10-month follow-up.
  3. More blue-collar workers (commercial manure applicators and livestock farmers) provided a mailing address only rather than both a mailing address and an email address as contact information, whereas almost all exhibitors and agency staff provided both a mailing address and an email address.
  4. Response rates were lowest for livestock farmers surveyed electronically at the 10-month follow-up.
  5. The 10-month follow-up captured outcomes considerably better than the 2-month follow-up did for behavioral variables that involve additional deliberation before implementation, such as investing in or divesting of equipment.

Recommendations

The following recommendations are based on the findings from the study and the perspective of the evaluation unit that carried out this comprehensive evaluation effort. These recommendations should help guide Extension educators who are planning evaluations with limited resources.

  1. Clearly define evaluation outcomes and what matters to the users of your evaluation.
    1. Consider administering a short-term follow-up if you want to capture outcomes related to knowledge, learning, or behaviors that may not require considerable investments of money and time.
    2. Consider doing a more long-term follow-up if you want to capture outcomes related to behaviors such as purchasing and selling or implementing new technology that require considerable investments of money and/or time.
  2. If you must choose one follow-up time point, expect outcomes at multiple time points, and know of no obvious preference among the users of your evaluation, consider that response rates are likely to be higher for shorter-term follow-ups.
  3. Clearly understand the audience, and select the survey mode accordingly.
    1. In our case, sending only electronic surveys (cheaper than mailed surveys) would have been acceptable for agency staff and exhibitors, but we would have missed over half of livestock farmers and commercial manure applicators (who did not provide an email address).
    2. Consider an electronic survey for a short-term follow-up if you are facing resource limitations and need to choose one method but do not have a clear indication of mode preference.

Caution is advised against generalizing these recommendations to all Extension programs operating with limited evaluation resources. However, they can provide direction for considering the various aspects needed to implement a meaningful program evaluation with limited resources.

Limitations

  1. Researchers were not able to randomly assign Expo participants to mail and email groups. This study involved comparisons of self-selected groups.
  2. Not all Expo participants provided contact information, and the recommendations herein are based on those who did. This study did not address the possible threat to validity posed by coverage bias.
  3. There was no control group of nonparticipants to ensure that the outcomes reported for Objective 5 were caused or influenced by the Expo and not by extraneous factors. However, the outcomes/changes recorded on the 2- and 10-month follow-ups most likely can be credited to the Expo because the survey questions were framed explicitly as "Because of attending the 2012 Expo, did you . . . ?"

References

Bailey, S. J., & Deen, M. Y. (2002). A framework for introducing program evaluation to Extension faculty and staff. Journal of Extension [Online], 40(2) Article 2IAW1. Available at: http://www.joe.org/joe/2002april/iw1.php

DeJong-Hughes, J., Erb, K., & Everett, L. (2011). Factors of success for large agricultural field events. Journal of Extension [Online], 49(2) Article 2TOT4. Available at: http://www.joe.org/joe/2011april/tt4.php

Deming, P., Meyers, A., & Klink, J. (2014, April). 2012 Manure Expo: Impacts on attendees 10 months after the event. Retrieved from http://www.uwex.edu/erc/doc/Eval/FullReport2012ManureExpoImpactsPairingFollowUps.pdf

Dillman, D. A. (2007). Mail and Internet surveys: The tailored design method (2nd ed., 2007 update with new Internet, visual, and mixed-mode guide). Hoboken, NJ: John Wiley & Sons.

Ghimire, N. R., & Martin, R. A. (2013). Does evaluation competence of Extension educators differ by their program area of responsibility? Journal of Extension [Online], 51(6) Article 6RIB1. Available at: http://www.joe.org/joe/2013december/rb1.php

Holz-Clause, M. S., Koundinya, V., Franz, N. K., & Borich, T. O. (2012). Employee job autonomy and control in a restructured organization. International Journal of Agricultural Management and Development, 2(4), 277–283. Retrieved from http://www.ijamad.com/2%284%29/IJAMAD7-2%284%29.pdf

Jayaratne, K. S. U., Lyons, A. C., & Palmer, L. (2008). A user-friendly evaluation resource kit for Extension agents delivering financial education programs. Journal of Extension [Online], 46(1) Article 1TOT3. Available at: http://www.joe.org/joe/2008february/tt3.php

Klink, J., & Meyers, A. (2013, January). 2012 Manure Expo. Follow-up survey results. Retrieved from http://www.uwex.edu/erc/doc/Eval/ImpactBrief2012ManureExpo2MonthFollowUp.pdf

Lamm, A. J., Israel, G. D., & Diehl, D. (2013). A national perspective on the current evaluation activities in Extension. Journal of Extension [Online], 51(1) Article 1FEA1. Available at: http://www.joe.org/joe/2013february/a1.php

McClure, M. M., Fuhrman, N. E., & Morgan, A. C. (2012). Program evaluation competencies of Extension professionals: Implications for continuing professional development. Journal of Agricultural Education, 53(4), 85–97. doi: 10.5032/jae.2012.04085.

Tobin, D., Thomson, J., Radhakrishna, R., & LaBorde, L. (2012). Mixed-mode surveys: A strategy to reduce costs and enhance response rates. Journal of Extension [Online], 50(6) Article 6TOT8. Available at: http://www.joe.org/joe/2012december/tt8.php