April 2007 // Volume 45 // Number 2 // Research in Brief // 2RIB4


Response Patterns: Effect of Day of Receipt of an E-Mailed Survey Instrument on Response Rate, Response Time, and Response Quality

Abstract
Are you seeking ways to improve response to e-mailed survey instruments? We examined the effects of day of receipt of an e-mailed survey instrument on 1) response rate, 2) length of time lapsed in responding, and 3) quality of response. Day of receipt explained no significant differences in response rate, response time, or response quality. Two recommendations emerged: 1) use a complement of best practices, including advance notice and multiple follow-ups, to increase participation of potential nonrespondents, and 2) understand the audience's preferred modality, organizational values, communication patterns, and medium to elicit information.


Glen Shinn
Professor
Department of Agricultural Leadership, Education, and Communications
Texas A&M University
College Station, Texas
g-shinn@tamu.edu

Matt Baker
Professor & Chair
Department of Agricultural Education and Communications
Texas Tech University
Lubbock, Texas
matt.baker@ttu.edu

Gary Briers
Department of Agricultural Leadership, Education, and Communications
Texas A&M University
College Station, Texas
g-briers@tamu.edu


Professional and technical landscapes can change quickly, and it is important to understand and describe changes that affect Extension programming as they occur. Descriptive research tools show promise for providing such insights. Survey questionnaires are among the most popular methods of collecting information from a target population. However, at a time when appraisals are needed more frequently, the rate of response in survey research is declining (Sheehan, 2001).

Phillips (1941) criticized mail surveys because of low response rates. Throughout the following six decades, researchers examined a myriad of techniques and their effects on response rate. Wright (2005) concluded that ". . . online survey researchers should conduct a careful assessment of their research goals, research timeline, and financial situation before choosing a specific product or service" (p. 1). Valuable best practices have been developed and proposed (Brashears, Akers, & Bullock, 2003; Bruzzone, 1999; Dillman, 2000; Dillman & Carley-Baxter, 2000; Fraze, Hardin, Brashears, Haygood, & Smith, 2003; Lindner, Murphy, & Briers, 2001; Mehta & Sivadas, 1995; Miller & Smith, 1983; Nie, Hillygus, & Erbring, 2002; Schaefer & Dillman, 1998; Sheehan & Hoy, 1999; Tse, 1998; Tse, Tse, Yin, Ting, Yi, Yee, & Hong, 1995; Walonick, n.d.; Witmer, Colman, & Katzman, 1999; Yun & Trumbo, 2000).

Dillman (2000), when examining mail and Internet survey methodologies, argued that "no other method of collecting survey data . . . offers so much potential for so little cost" (p. 400). Sheehan (2001), in a review of e-mail survey response rates, noted,

. . . while the number of studies that use e-mail to collect data has been increasing over the past fifteen years, the average response rate to the surveys appears to be decreasing (Table 1). On average, the 31 studies reported a mean response rate of 36.83%. The 1995/6 period showed seven studies using e-mail surveys with an average response rate of about 46%. The 1998/9 period, in contrast, showed thirteen studies using e-mail surveys with an average response rate of about 31%. (p. 7)

There is evidence that response rates continue to decrease.

While Rea and Parker (1997) found that length and format have a significant effect on return, Dillman, Tortora, and Conradt (1998) reported that fancy vs. plain designs may not. The effects of other "features" of survey instruments are not well known. For example, does the day on which an instrument is received influence response patterns? This investigation examined the effects of the day of receipt of an e-mailed survey instrument on 1) the response rate, 2) the length of time lapsed in responding by scholars, and 3) the quality of the response, as measured by the number of nominations to a panel of experts. With increased expectations of Extension program accountability, survey research, one of the most frequently used evaluation methodologies, is becoming less useful due to declining response rates.

Methods

The target population consisted of authors from the United States who published in one or more of the following journals: Journal of Agricultural Education, Journal of International Agricultural and Extension Education, or Journal of Extension. The researchers developed the sampling frame by listing all authors who had published in one or more of these journals between January 2004 and August 2005. Five authors were removed from the frame: two because of their direct involvement in the study and three whose e-mail addresses were undeliverable. The accessible population included 192 authors.

In this experimental study, authors were randomly assigned to receive the e-mailed survey questionnaire at the beginning of a workday, Monday through Friday, with approximately one-fifth of the authors receiving the questionnaire on each of the 5 days. An individual, personalized e-mail was sent to each author after the close of business on the workday prior to the assigned day of arrival. The original e-mail message was sent to the Monday group on November 7, 2005, and to the remaining groups on each of the following four workdays. Because of the Thanksgiving holiday, the Thursday and Friday groups had two fewer workdays in which to respond during the 28-day period.
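As a minimal illustrative sketch (not the authors' actual procedure; the author list, seed, and group-size handling here are hypothetical), the random assignment of authors to the five weekday groups could be carried out as follows:

```python
import random

# Hypothetical frame of 192 author e-mail addresses (the real frame is not published).
authors = [f"author{i:03d}@example.edu" for i in range(192)]

weekdays = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]

random.seed(42)           # fixed seed chosen here only so the sketch is reproducible
random.shuffle(authors)   # randomize the order of the frame

# Deal the shuffled frame into five groups of roughly one-fifth each.
groups = {day: authors[i::5] for i, day in enumerate(weekdays)}

for day, members in groups.items():
    print(day, len(members))   # 38-39 authors per group in this sketch
```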

The questionnaire asked recipients to nominate themselves or colleagues to participate in a research project of common professional interest. The attempted census of an accessible population was treated as a time and place sample (Oliver & Hinkle, 1982), and inferential statistics were used in the analyses.

The independent variable, day delivered, was recorded nominally for each potential participant in the study as the day on which the e-mail questionnaire was delivered to that person (i.e., as Monday, Tuesday, Wednesday, Thursday, or Friday). Then, the value for each of the three dependent variables, response (operationalized nominally as yes or no), days to respond (if returned, operationalized as number of days to respond), and quality of response (operationalized as number of nominees provided), was recorded as responses were received. The variable "days to respond" was recorded as the number of workdays, Monday through Friday, that transpired from the day the e-mail was sent to the day response was received.

Consequently, receipt of a response on a weekend day was assigned the same value as the subsequent Monday. Data were collected for 35 days following the 5 days of delivery of the e-mailed questionnaires. Because "response" was the variable under examination, no follow-up of non-respondents was conducted.
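The coding rule for "days to respond" (count workdays only, with a weekend receipt treated as the subsequent Monday) can be expressed compactly. The sketch below is illustrative only; the function name and date handling are assumptions, not the authors' Excel/SPSS procedure.

```python
from datetime import date, timedelta

def workdays_to_respond(sent: date, received: date) -> int:
    """Workdays (Monday-Friday) elapsed from the send date to the response date,
    with a Saturday or Sunday receipt rolled forward to the subsequent Monday."""
    while received.weekday() >= 5:          # 5 = Saturday, 6 = Sunday
        received += timedelta(days=1)
    days = 0
    current = sent
    while current < received:
        current += timedelta(days=1)
        if current.weekday() < 5:           # count only Monday-Friday
            days += 1
    return days

# Example: e-mailed Monday, November 7, 2005; reply received Saturday, November 12
print(workdays_to_respond(date(2005, 11, 7), date(2005, 11, 12)))  # -> 5, same as a Monday reply
```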

Data were recorded in an Excel database and analyzed using SPSS v.13. Descriptive statistics, including frequencies, means, and standard deviations, were used to describe responses. Because of the categorical nature of the measures, chi-square analysis was used to examine the relationship between day of receipt and rate of response. One-way analyses of variance (ANOVA) were used to compare days to respond and quality of response (the dependent variables) as influenced by day of receipt (the independent variable). ANOVA was selected because the independent variable (day of the week) was measured categorically and the dependent variables were measured on an interval scale.
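The authors conducted these analyses in SPSS; a rough equivalent in Python (with a tiny, entirely hypothetical set of records to show the call structure) might look like this:

```python
import pandas as pd
from scipy import stats

# Hypothetical records: one row per potential respondent.
records = pd.DataFrame({
    "day_sent":        ["Monday", "Monday", "Monday", "Tuesday", "Tuesday",
                        "Tuesday", "Wednesday", "Wednesday", "Wednesday"],
    "responded":       [True, True, False, True, True, True, False, True, True],
    "days_to_respond": [3, 4, None, 2, 7, 1, None, 5, 6],
    "n_nominees":      [6, 2, None, 2, 4, 8, None, 3, 1],
})

# Day of receipt vs. rate of response: chi-square test of association.
crosstab = pd.crosstab(records["day_sent"], records["responded"])
chi2, p_chi2, dof, _ = stats.chi2_contingency(crosstab)

# Day of receipt vs. days to respond (respondents only): one-way ANOVA.
groups = [g["days_to_respond"].dropna()
          for _, g in records[records["responded"]].groupby("day_sent")]
f_stat, p_anova = stats.f_oneway(*groups)

print(f"chi-square = {chi2:.2f} (p = {p_chi2:.2f}); F = {f_stat:.2f} (p = {p_anova:.2f})")
```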

Findings

E-mailed questionnaires were sent on five consecutive workdays to 192 authors of three professional journals, with approximately one-fifth of the possible participants receiving their e-mails on each of the 5 days. Data collection was terminated 35 days after the e-mail was received; thus, each potential respondent, regardless of the day the e-mail was sent, was given 35 days to respond. At the conclusion of data collection, the responses received yielded the following results.

Of the 192 potential participants contacted, 60 responded, for a response rate of 31.25% (Table 1). Response rate by day of week contacted ranged from a low of 20.51% for Monday contacts to a high of 43.59% for those contacted on Wednesday. On average, participants responded in 4.57 days (SD=5.00). Those contacted on Monday tended to respond most quickly (3.25 days), while Friday contacts responded most slowly (5.90 days). The number of nominations provided by participants ("quality of response") averaged 4.76 nominees (SD=4.74), ranging from a low of 3.21 (for Tuesday contacts) to a high of 7.25 (for Monday contacts). Total nominations per day ranged from a low of 48 from respondents e-mailed on Tuesday to a high of 67 from Thursday participants.

Table 1.
Number of Possible Participants, Number of Responses, Quality of Responses, and Average Days to Respond Based on Day Contacted

Day of Week E-mailed/Contacted | Number of Possible Participants E-mailed | Number of Responses | Mean Days to Respond | Quality of Responses (Mean Number of Nominees) | Total Nominations/Day (Number of Responses x Quality of Responses)
Monday | 39 | 8 | 3.25 | 7.25 | 58
Tuesday | 40 | 15 | 3.33 | 3.21 | 48
Wednesday | 39 | 17 | 5.24 | 3.41 | 58
Thursday | 39 | 10 | 5.00 | 6.70 | 67
Friday | 35 | 10 | 5.90 | 5.30 | 53
Totals/Means | 192 | 60 | 4.57 | 4.76 | 281

To examine the results of the study inferentially, "day of week contacted" and "response/no response" were cross-tabulated, and a chi-square analysis was conducted. Data in Table 2 show the results of that analysis. Based on the chi-square value of 5.27 (p=.26), there is little evidence to suggest that day of the week contacted and rate of response are associated.

Table 2.
Chi-square Analysis of Association Between Day of Week Contacted and Rate of Response

Day of Week Contacted | Did Respond | Did Not Respond | Total | Chi-square
Monday | 8 | 31 | 39 | --
Tuesday | 15 | 25 | 40 | --
Wednesday | 16 | 23 | 39 | --
Thursday | 10 | 29 | 39 | --
Friday | 10 | 25 | 35 | --
Total | 59 | 133 | 192 | 5.27 (ns)
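The reported statistic can be checked directly from the counts in Table 2; the sketch below uses scipy rather than the authors' SPSS and reproduces the published value.

```python
from scipy.stats import chi2_contingency

# Respond / did-not-respond counts from Table 2, Monday through Friday.
observed = [
    [8, 31],   # Monday
    [15, 25],  # Tuesday
    [16, 23],  # Wednesday
    [10, 29],  # Thursday
    [10, 25],  # Friday
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.2f}")  # chi-square = 5.27, df = 4, p = 0.26
```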

Next, the dependent variable "days to respond" was examined based on the independent variable "day of week contacted." An ANOVA was used to compare the average days to respond among the 5 days of the week on which participants were contacted. Data in Table 3 provide the results of the analysis.

Table 3.
ANOVA Comparing Number of Days to Respond by Day of the Week Contacted

Day of the Week Contacted | Frequency | Mean Number of Days to Respond | S.D. | F
Monday | 8 | 3.25 | 3.73 | --
Tuesday | 15 | 3.33 | 3.96 | --
Wednesday | 17 | 5.24 | 5.92 | --
Thursday | 10 | 5.00 | 4.50 | --
Friday | 10 | 5.90 | 6.23 | --
Total/Average | 60 | 4.57 | 5.00 | .624 (ns)

On average, the 60 participants responded in 4.57 days. Across the days of the week, mean days to respond ranged from 3.25 days (for those contacted on Monday) to 5.90 days (for Friday contacts). An inferential comparison of the five means was not statistically significant, F(4, 55) = .624, p = .64.
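Because only group sizes, means, and standard deviations are reported, the F statistic in Table 3 can be checked from those summary statistics alone. This is a verification sketch (the function below is ours, not the authors'); the small discrepancy reflects rounding in the published means and standard deviations.

```python
def f_from_summary(ns, means, sds):
    """One-way ANOVA F computed from per-group sizes, means, and standard deviations."""
    k, n_total = len(ns), sum(ns)
    grand_mean = sum(n * m for n, m in zip(ns, means)) / n_total
    ss_between = sum(n * (m - grand_mean) ** 2 for n, m in zip(ns, means))
    ss_within = sum((n - 1) * s ** 2 for n, s in zip(ns, sds))
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

# Table 3: days to respond, Monday through Friday.
ns    = [8, 15, 17, 10, 10]
means = [3.25, 3.33, 5.24, 5.00, 5.90]
sds   = [3.73, 3.96, 5.92, 4.50, 6.23]

print(round(f_from_summary(ns, means, sds), 3))  # -> 0.626 on 4 and 55 df (reported as .624)
```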

Finally, "quality of response" (operationalized as number of persons nominated) was examined based on day of week contacted (Table 4). An ANOVA was performed to compare quality of response by day of the week on which participants were contacted.

Table 4.
ANOVA Comparing Quality of Response by Day of the Week Contacted

Day of Week Contacted | Frequency | Quality of Response (Mean Number of Nominees) | S.D. | F
Monday | 8 | 7.25 | 7.17 | --
Tuesday | 15 | 3.00 | 2.39 | --
Wednesday | 17 | 3.41 | 2.35 | --
Thursday | 10 | 6.70 | 6.65 | --
Friday | 10 | 5.30 | 5.06 | --
Total/Average | 60 | 4.68 | 4.74 | 1.99 (ns)

The mean number of nominees ranged from 3.00 (for participants contacted on Tuesday) to 7.25 (for participants e-mailed on Monday), with an overall mean of 4.68 nominees across the 60 respondents. The ANOVA was not statistically significant, F(4, 55) = 1.99, p = .11.

Conclusions

Nonresponse error continues to concern survey researchers and Extension professionals. Our goal was to identify practices that increase the response rate for electronic survey research instruments. Researchers, including Bruzzone (1999), Dillman (2000), Dillman and Carley-Baxter (2000), Hewson, Yule, Laurent, and Vogel (2003), and Yun and Trumbo (2000), recognized that rapid change affects knowledge management systems. Consequently, there is an ongoing need to better understand the changing behaviors of "customers and organizations." The adoption of new electronic technologies, particularly e-mail, short message service (SMS), and radio frequency identification (RFID), changes the way we communicate with Extension audiences. Gingrich (2001) recognized two patterns of change, stemming from computers and from the combined nanotechnology-biology-information revolution, and called this period the "age of transitions."

The literature abounds with recognized best practices for improving the effectiveness and efficiency of survey design and delivery. Recognized practices address salience, anonymity or confidentiality, general layout and format, length of instrument, and order of questions. Several traditionally used practices, such as "fancy" layouts, handwritten postscripts, incentives, original signatures, and personalized cover letters, have not been shown to produce significant differences in survey response.

When examining the effect of day of receipt of an electronic survey instrument on the response rate, we found no significant difference in the rate of response by day the instrument was e-mailed/received. Our target audience members, agricultural education and Extension journal authors, were just as likely to respond if they received the instrument on Monday as on any other workday.

Yun and Trumbo (2000) noted that ". . . an interesting effect was observed in the timing of the e-mail and Web responses. Over 80% of the electronic responses were collected within three days after the initial e-mail was sent out" (p. 12). In the research reported here, the average response time was 4.57 days. Further, the length of time taken by the target audience to respond was not associated with the day of receipt of the electronic survey instrument; this research found no significant difference in the length of response time based on the workday on which potential participants received the instrument.

The quality of the response, as judged by the number of nominations, was not associated with the day of receipt of the electronic survey instrument. This research found no significant difference in the quality of response as influenced by the workday on which subjects received the instrument.

Limitations

This target audience was a well-defined, individually connected, and accurately identified cohort of authors. Their behavior may not be similar to that of other types of target audiences. This research sought simple response data, asking only for the identification and nomination of experts within a specified discipline; behavior may differ when more complex issues or time-consuming requests are involved. Finally, this research offered clear benefits to the target audience and general value to the larger professional organization; behavior may differ when benefits are less obvious.

Recommendations

Because the response rate of 31.25% was unacceptably low (Lindner, Murphy, & Briers, 2001; Miller & Smith, 1983), strategies must be employed to reduce the threat of nonresponse error. Consequently, a complement of best practices, including advance notices such as postcards and repeated follow-up contacts, should be used to increase participation of potential nonrespondents (Dillman, 2000). Efforts should also be made to establish the survey's salience by connecting it to priority concerns of the field of study.

In terms of future research, our findings suggest two recommendations. First, e-mail is a valuable data-collection tool for Extension professionals, and the day of receipt does not appear to affect response patterns; future research is needed to validate the efficacy of other recognized best practices (Dillman, 2000). Second, this research reinforces the view that response rate issues in survey research are complex and multi-faceted. Response rate is likely the result of a complex interaction of audience, time, innovation, modality, meaning, and value.

References

Brashears, M.T., Akers, C., & Bullock, S. (2003). A test of a bimodal survey model on the cooperative communicators association: A case study. Proceedings of the National Agricultural Education Research Conference, December 9-11, Orlando, FL. Retrieved September 6, 2006, from http://aaae.okstate.edu/proceedings/2003/Proceedings.pdf#search=%222003%20NAERC%22

Bruzzone, D. (1999). The top 10 insights about the validity of conducting research online. Advertising Research Foundation. Retrieved November 19, 2005, from http://www.swiftinteractive.com/white2.asp

Dillman, D. (2000). Mail and Internet surveys: The tailored design method (2nd ed.). New York: John Wiley and Sons.

Dillman, D., & Carley-Baxter, L. (2000). Structural determinants of mail survey response rates over a 12 year period, 1988-1999. Retrieved November 19, 2005, from http://www.sesrc.wsu.edu/dillman/papers/2000%20ASA%20Proceedings--Dillman.pdf

Dillman, D., Tortora, R., & Conradt, J. (1998). Influence of plain vs. fancy designs on response rates for web surveys. Retrieved November 19, 2005, from http://www.sesrc.wsu.edu/dillman/papers/asa98ppr.pdf

Fraze, S. D., Hardin, K. K., Brashears, M. T., Haygood, J. L., & Smith, J. H. (2003). The effects of delivery mode upon survey response rate and perceived attitudes of Texas agri-science teachers. Journal of Agricultural Education, 44(2), 27-37.

Gingrich, N. (2001). Vision for the converging technologies. In M. Roco & W. Bainbridge (Eds.), Converging technologies for improving human performance: Nanotechnology, biotechnology, information technology and cognitive science (pp. 37-55). The Netherlands: Kluwer Academic Publishers.

Hewson, C., Yule, P., Laurent, D., & Vogel, C. (2003). Internet research methods: A practical guide for the social and behavioural sciences. London: Sage Publications, Inc.

Lindner, J. R., Murphy, T. H., & Briers, G. E. (2001). Handling nonresponse in social science research. Journal of Agricultural Education, 42(4), 43-53.

Mehta, R., & Sivadas, E. (1995). Comparing response rates and response content in mail versus electronic surveys. Journal of the Market Research Society, 37(4), 429-440.

Miller, L. E., & Smith, K. L. (1983). Handling nonresponse issues. Journal of Extension, 21(5), 45-50.

Nie, N., Hillygus, S., & Erbring, L. (2002). Internet use, interpersonal relations and sociability: Findings from a detailed time diary study. In B. Wellman (Ed.), The Internet in everyday life (pp. 215-243). London: Blackwell Publishers.

Oliver, J., & Hinkle, D. (1982). Occupational educational research: Selecting statistical procedures. Journal of Studies in Technical Careers, 4(3), 199-207.

Phillips, M. (1941). Problems of questionnaire investigation. Research Quarterly, 12, 528-537.

Rea, L., & Parker, R. (1997). Designing and conducting survey research (2nd ed.). San Francisco: Jossey-Bass.

Schaefer, D., & Dillman, D. (1998). Development of a standard e-mail methodology: Results of an experiment. Public Opinion Quarterly, 62(3), 378-390. Retrieved November 5, 2005, from http://survey.sesrc.wsu.edu/dillman/papers/E-Mailppr.pdf

Sheehan, K. (2001, January). E-mail survey response rates: A review. Journal of Computer-Mediated Communication, 6(2). Retrieved November 5, 2005, from http://jcmc.indiana.edu/vol6/issue2/sheehan.html

Sheehan, K., & Hoy, M. (1999). Using e-mail to survey Internet users in the United States: Methodology and assessment. Journal of Computer-Mediated Communication, 4(3). Retrieved November 19, 2005, from http://jcmc.indiana.edu/vol4/issue3/sheehan.html

Tse, A. (1998). Comparing the response rate, response speed and response quality of two methods of sending questionnaires: E-mail vs. mail. Journal of the Market Research Society, 40(4), 353-361.

Tse, A., Tse, K., Yin, C., Ting, C., Yi, K., Yee, K., & Hong, W. (1995). Comparing two methods of sending out questionnaires: E-mail versus mail. Journal of the Market Research Society, 37(4), 441-446.

Walonick, D. (n.d.). Everything you wanted to know about questionnaires but were afraid to ask. Retrieved November 6, 2005, from http://www.statpac.com/research-papers/questionnaires.htm

Witmer, D., Colman, R., & Katzman, S. (1999). From paper-and-pencil to screen-and-keyboard: Toward a methodology for survey research on the Internet. In S. Jones (Ed.), Doing Internet research: Critical issues and methods for examining the Net (pp. 145-161). Thousand Oaks, CA: Sage.

Wright, K. (2005). Researching Internet-based populations: Advantages and disadvantages of online survey research, online questionnaire authoring software packages, and web survey services. Journal of Computer-Mediated Communication, 10(3), article 11. Retrieved November 19, 2005, from http://jcmc.indiana.edu/vol10/issue3/wright.html

Yun, G., & Trumbo, C. (2000). Comparative response to a survey executed by post, e-mail and web form. Journal of Computer-Mediated Communication, 6(1). Retrieved November 21, 2005, from http://jcmc.indiana.edu/vol6/issue1/yun.html#conclusion