June 2008 // Volume 46 // Number 3 // Research in Brief // 3RIB3


Response Rates to Expect from Web-Based Surveys and What to Do About It

Abstract
This article reports research that calculated the response rates of 84 Web-based surveys deployed over 33 months. Response rate varied by survey type: (1) Meeting/Conference Evaluations - 57%; (2) Needs Assessments - 40%; (3) Output/Impact Evaluations - 51%; (4) Ballots - 62%. A high survey response rate is important, and non-response error should be reduced when resources permit. Weighing cost against benefit, a less-than-optimum (<85%) response rate for needs assessments and conference evaluations may not be critical; a breadth and depth of respondent reactions will still provide much information for program development. Dealing with non-response error for program impact evaluations will generate the most value.


Thomas M. Archer
Leader, Program Development and Evaluation
Ohio State University Extension
Columbus, Ohio
archer.3@osu.edu


Introduction

Since the beginning of this decade, there has been an explosion in the use of Web-based survey technology. Savings in time, cost, and data entry are cited as the most appealing features of Web-based surveys (Wright, 2005). However, one question that is often asked is, "What should I expect in terms of a response rate for my Web-based survey?"

Please note that the response rate one can expect and the acceptability of a particular response rate are not the same thing. This article does not directly deal with non-response error as detailed in an earlier Journal of Extension article (Miller & Smith, 1983). That article highlighted several ways to deal with non-response error, most of which involve determining whether non-respondents differ from respondents. The person who administers a questionnaire through Web-based survey technology must determine whether resources exist to deal with potential non-response error. If such resources exist, they should be used to obtain the highest possible response rate. If not, then that person should report only what the respondents contributed and not generalize to all those surveyed.

The best way to deal with non-response error is to increase the response rate through the questionnaire design and deployment processes. There are several reported reasons why potential respondents fail to complete a Web-based survey. These include questions arranged in tables, graphically complex designs, pull-down menus, unclear instructions, and the absence of navigation aids (Bosnjak & Tuten, 2001).

Factors that have been found to increase response rates include personalized email invitations, follow-up reminders, pre-notification of the intent to survey, and simpler formats (Cook, 2000; Solomon, 2001), as well as incentives, authentic sponsorship, and multi-modal approaches (Johnson, 2005). There are other factors that may influence response rate for which no formal studies have been identified, including the age of potential respondents, the population to which the survey is administered, and the purpose of the survey.

Most research on response rates for Web-based surveys has focused on manipulating either deployment or questionnaire variables in single survey situations. That is, in a given survey deployment, potential respondents are assigned to the various treatment groups.

There are many deployment and questionnaire variables that should be studied. These include:

  1. Total number of potential respondents (email invitations deployed);
  2. Number of email addresses bounced;
  3. Number of people opting out;
  4. Year launched;
  5. Month launched;
  6. Date of month launched;
  7. Number of reminders;
  8. Number of days left open;
  9. Days between launch and reminder;
  10. Days between reminders;
  11. Length of subject line;
  12. Length of invitation;
  13. Readability level of invitation;
  14. Total number of questions;
  15. Number of fixed response questions;
  16. Number of open-ended response questions;
  17. Number of one-line open-ended questions;
  18. Number of Y/N questions;
  19. Number of demographic questions;
  20. Number of headings;
  21. Length of rating scales in rating questions; and
  22. Readability level of survey.

When the above listing of variables was reviewed through a regression analysis of Zoomerang archived data (Archer, 2007), two variables, (1) the log of the number of potential respondents and (2) the number of days left open, generated the highest R². Together, these two variables explained 41.4% of the variability in response rate. No other variable or combination of variables in the list above contributed more explanation.
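
As an illustration only, the sketch below shows how such a two-predictor regression and its R² might be computed. It is a minimal Python sketch with invented survey records, not the original SPSS analysis, and it assumes a base-10 log for the number of potential respondents.

    # Minimal sketch (hypothetical data): regress response rate on the log of the
    # number of potential respondents and the number of days the survey was left open.
    import numpy as np

    # Hypothetical per-survey records: (potential respondents, days open, response rate %)
    surveys = [
        (167, 14, 57.0),
        (531, 14, 39.7),
        (161, 15, 51.4),
        (143, 16, 62.2),
        (300, 10, 45.0),
        (80, 21, 66.0),
    ]

    n_potential = np.array([s[0] for s in surveys], dtype=float)
    days_open = np.array([s[1] for s in surveys], dtype=float)
    resp_rate = np.array([s[2] for s in surveys], dtype=float)

    # Design matrix: intercept, log10 of potential respondents (base-10 is an
    # assumption), and days left open.
    X = np.column_stack([np.ones_like(resp_rate), np.log10(n_potential), days_open])

    # Ordinary least squares fit.
    coef, *_ = np.linalg.lstsq(X, resp_rate, rcond=None)
    predicted = X @ coef

    # R-squared: proportion of variance in response rate explained by the two predictors.
    ss_res = np.sum((resp_rate - predicted) ** 2)
    ss_tot = np.sum((resp_rate - resp_rate.mean()) ** 2)
    r_squared = 1 - ss_res / ss_tot
    print("coefficients:", coef)
    print("R-squared: %.3f" % r_squared)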

One of the variables that has not been studied, but intuitively could influence response rate the most, is the purpose for which a questionnaire is deployed. For the study reported here, four different purposes of Web-based surveys were identified: (1) Meeting, Workshop, or Conference Evaluations; (2) Needs Assessments; (3) Impact Evaluations; and (4) Ballots.

Method

For a prior study, over a 2-year and 9-month period, the Ohio State University Extension Program Development and Evaluation Unit deployed 99 Web-based surveys (Archer, 2007). Questionnaires were sent to a variety of audiences associated with Extension. The current study is a retrospective investigation into the relationship between response rates and the intended purpose of Web-based surveys. It used the same survey data as the previous study, archived in Zoomerang, the platform through which the surveys were managed.

For each of the surveys deployed, a list of email addresses was supplied by the Extension professional requesting the survey. Each Extension professional also indicated that the best means of contacting potential respondents was through an email invitation. The surveys were administered to a variety of local, multi-county, statewide, and nationwide Extension audiences. All invitees were adults. All of these Web-based surveys included an individual email invitation to potential respondents. All surveys were deployed through Zoomerang, using the same template, background color, and Extension logo.

Each of these surveys was assigned to one of four categories: (1) Meeting, Workshop or Conference Evaluations; (2) Needs Assessments; (3) Impact Evaluations; and (4) Ballots. Two evaluation professionals reviewed the stated purpose of each survey and independently assigned it to a category. When their assignments differed, they discussed the survey in question and agreed on a category.

Because the number of reminders is highly correlated with the number of days that a survey is left open (Archer, 2007), only surveys that included two reminders were selected for further study. Eighty-four of the 99 surveys included two reminders and were included in the following analysis. Holding the number of reminders constant across all surveys in the current study eliminated its influence on the response rate.

Response rate percentage was calculated as the total number of completed questionnaires divided by the total number of email invitations originally deployed, multiplied by 100. The data needed to calculate response rates were archived in the Web survey program database. An Excel spreadsheet was developed for data entry; the relevant data were extracted for each survey and entered into the appropriate cells. The Excel data were then imported into the Statistical Package for the Social Sciences (SPSS) for analysis.
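
The calculation just described, together with the two-reminder filter, can be sketched in a few lines of Python. The records and field layout below are hypothetical; the actual work was done in Excel and SPSS, and the grouping by purpose anticipates the means reported in Table 1.

    # Minimal sketch (hypothetical records): keep two-reminder surveys, compute each
    # survey's response rate, and average the rates within each survey purpose.
    from collections import defaultdict

    # (purpose, number of reminders, email invitations deployed, completed questionnaires)
    surveys = [
        ("Needs Assessment", 2, 500, 190),
        ("Impact Evaluation", 2, 150, 80),
        ("Conference Evaluation", 2, 170, 100),
        ("Ballot", 1, 140, 90),  # excluded: not a two-reminder survey
    ]

    # Keep only surveys that used exactly two reminders, as in the study.
    kept = [s for s in surveys if s[1] == 2]

    # Response rate (%) = completed questionnaires / invitations deployed * 100.
    rates_by_purpose = defaultdict(list)
    for purpose, _, invited, completed in kept:
        rates_by_purpose[purpose].append(completed / invited * 100)

    # The mean response rate per purpose is the mean of the per-survey rates,
    # not a rate pooled across counts.
    for purpose, rates in sorted(rates_by_purpose.items()):
        print("%s: %.1f%% (n=%d)" % (purpose, sum(rates) / len(rates), len(rates)))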

Findings

For each of the four survey types, the following were calculated: the number of surveys, the mean response rate, the mean number of days each type of survey was left open, and the mean number of potential respondents (email invitations originally deployed). The mean response rate is the average of the response rates for all surveys within each grouping of deployed surveys. See Table 1.

There were 26 Meeting, Workshop or Conference Evaluations, left open for an average of 13.8 days to an average of 167 potential respondents with a mean response rate of 57%. There were also 40 Needs Assessments, left open for an average of 14.2 days to an average of 531 potential respondents with a mean response rate of 39.7%. There were 14 Impact Evaluations, left open for an average of 14.9 days to an average of 161 potential respondents with a mean response rate of 51.4%. Only 4 Ballots were included, left open for an average of 16.2 days to an average of 143 potential respondents resulting in a mean response rate of 62.2%.

Table 1.
Purpose of Web-Based Survey by Number of Surveys, Mean Response Rate, Mean Days Left Open, and Mean Email Invitations Originally Deployed

Purpose of Web-Based Survey                | Number of Surveys | Mean Response Rate | Mean Days Left Open | Mean Email Invitations Originally Deployed
Meeting, Workshop or Conference Evaluation | 26                | 57.0%              | 13.8                | 167
Needs Assessment                           | 40                | 39.7%              | 14.2                | 531
Impact Evaluation                          | 14                | 51.4%              | 14.9                | 161
Ballot                                     | 4                 | 62.2%              | 16.2                | 143
Overall                                    | 84                | 48.3%              |                     |

Discussion

From the study of 84 Zoomerang surveys reported here, the mean response rate was highest for post-conference questionnaires, apart from the small group of ballots. Although no references could be found that specifically studied response rates to post-conference questionnaires, the observed mean response rate of 57% would likely be consistent with that of paper-and-pencil questionnaires collected at the end of a conference and perhaps even higher than that of a mailed questionnaire following a meeting or conference.

The lowest response rates found in this study were for needs assessment questionnaires. This may be due to the very nature of a needs assessment. First, perhaps not all of the right people were identified to respond, so many potential respondents felt the survey was not relevant to them. In addition, there were no doubt other potential respondents who were not comfortable responding, or did not know how to respond, to questions about their needs.

Lindner, Murphy, and Briers (2001) indicated that steps must be taken to account for possible non-response error whenever a response rate is less than 85%. Such a high response rate is possible, but not likely.

The resources necessary to complete follow-up contacts with Web-based survey non-respondents would be similar to the expense of traditional mail surveys. The cost of raising response rates from those achieved here (39.7% to 62.2%) to the optimum 85% would be significant in terms of both time and money.

But it is not always necessary to have an 85%+ response rate to obtain valuable information. For example, with end-of-meeting surveys and needs assessments, non-response may not be as critical. If the primary goals of these types of surveys are to gain suggestions for direction and improvement or obtain a measure of quality, then the responses are just as meaningful when a breadth and range of response is obtained, even with lower response rates.

Although it would be desirable to apply findings in any survey effort to the entire potential respondent pool, responses from 40% or less of the potential respondents still provide a great deal of information. The effort and costs (cash expenses and time) of increasing the response rate to 85%, if that were even possible, would not be justified by the additional information gained for determining priorities or measuring quality. Program improvement and program development can still be well served without an overwhelming response rate.

Impact surveys would provide the most return on implementing procedures that compare non-respondents to respondents in order to eliminate non-response error. If one could show, for example, that the economic gain from a given Extension program applied to all program participants, rather than just the 51% who responded to the impact questionnaire, the finding would be far more powerful and useful. Techniques for comparing respondents to non-respondents are therefore most worthwhile for such impact surveys.

Conclusions

For in-house, Web-based Extension surveys, expect the following response rates by survey type:

  1. Meeting or Conference Evaluations - 57%;

  2. Needs Assessments - 40%;

  3. Output or Impact Evaluations - 51%; and

  4. Ballots - 62%.

When resources permit, implement activities to reduce or eliminate non-response error. The best approach is to use procedures in the original survey deployment that will ensure higher response rates. Other procedures include comparing known characteristics of respondents with those of non-respondents, comparing late respondents to early respondents, or randomly sampling a portion of the non-respondents and following up with a telephone or personal interview.
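
As one illustration of the early-versus-late comparison, the sketch below runs an independent-samples t-test on a hypothetical rating item, treating late respondents as a proxy for non-respondents. The data, item, and 0.05 threshold are all assumptions for demonstration; this is not a procedure taken from the study.

    # Minimal sketch (hypothetical data): compare early and late respondents on a
    # rating item, using late respondents as a proxy for non-respondents.
    from scipy import stats

    early_scores = [4.2, 3.8, 4.5, 4.0, 3.9, 4.3, 4.1]  # e.g., responses before the first reminder
    late_scores = [3.9, 4.4, 4.0, 4.2, 3.7, 4.1]         # e.g., responses after the second reminder

    # Welch's t-test (does not assume equal variances).
    t_stat, p_value = stats.ttest_ind(early_scores, late_scores, equal_var=False)
    print("t = %.2f, p = %.3f" % (t_stat, p_value))

    if p_value > 0.05:
        print("No evidence that early and late respondents differ on this item.")
    else:
        print("Early and late respondents differ; generalize with caution.")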

Considering cost versus benefit, a less than optimum (<85%) response rate for needs assessments or conference evaluations may not be critical. A breadth and depth of respondent reactions and suggestions will provide much information for program development, much more than no information at all.

Finally, dealing with non-response error for program impact evaluations will generate the most value for the extra effort.

References

Archer, T. M. (2007). Characteristics associated with increasing the response rates of Web-based surveys. Practical Assessment, Research & Evaluation, 12(12). Retrieved June 12, 2008 from: http://pareonline.net/getvn.asp?v=12&n=12

Bosnjak, M. M., & Tuten, T. L. (2001). Classifying response behaviors in Web-based surveys. Journal of Computer-Mediated Communication, 6(3). Retrieved June 12, 2008 from: http://jcmc.indiana.edu/vol6/issue3/boznjak.html

Cook, C. (2000). A meta-analysis of response rates in Web- or Internet-based surveys. Educational and Psychological Measurement, 60(6), 821-836.

Johnson, D. (2005). Addressing the growing problem of survey non-response. PowerPoint presentation at the Survey Research Center Colloquium, October 2005, Penn State University, University Park, PA. Retrieved June 12, 2008 from: http://www.ssri.psu.edu/survey/Nonresponse1.ppt

Lindner, J. R., Murphy, T. H., & Briers, G. E. (2001). Handling non-response in social science research. Journal of Agricultural Education, 42(4), 43-53.

Miller, L. E., & Smith, K. L. (1983). Handling non-response issues. Journal of Extension [On-line], 21(5). Available at: http://www.joe.org/joe/1983september/83-5-a7.pdf

Solomon, D. J. (2001). Conducting Web-based surveys. Practical Assessment, Research & Evaluation, 7(19). Retrieved June 12, 2008 from: http://pareonline.net/getvn.asp?v=7&n=19

Wright, K. B. (2005). Researching Internet-based populations: Advantages and disadvantages of online survey research, online questionnaire authoring software packages, and Web survey services. Journal of Computer-Mediated Communication, 10(3), article 11. Retrieved June 12, 2008 from: http://jcmc.indiana.edu/vol10/issue3/wright.html