The Journal of Extension - www.joe.org

October 2014 // Volume 52 // Number 5 // Tools of the Trade // v52-5tt6

Controlling Survey Response Error in a Mail Survey of Dairy Farmers: A Case Report

Abstract
Survey results are often presented with minimal description of how survey error was controlled. The objective of this article is to help survey developers and survey data interpreters (1) better understand the sources of survey error and (2) consider these sources of error when developing or reading a description of a survey's methodology. The subject of this article is a survey mailed to Vermont dairy farmers to assess farm characteristics relevant to the potential spread of disease among farms. Sources of error and how they were addressed in this survey are presented.


Julia M. Smith
Extension Associate Professor
University of Vermont
Burlington, Vermont
julie.m.smith@uvm.edu

Introduction

All surveys are vulnerable to error. When survey results are reported without considering the impact of survey error, improper conclusions can be reached. Potential errors are categorized as coverage, sampling, measurement, and nonresponse (Groves, 1989, 2004, as cited in Dillman, Smyth, & Christian, 2009). A description of each type of error and how it was addressed in a mail survey is presented here. The intent of this article is to help others be more intentional in designing surveys to minimize error and to help those interpreting or acting on survey results be more aware of potential sources of error that may bias the results.

This article describes how survey error was addressed in the implementation of a survey mailed to Vermont dairy farmers to assess farm characteristics relevant to the potential spread of disease among farms.

Types of Survey Errors and Where to Look For Them

Survey design and implementation should consider four potential sources of overall survey error: coverage, sampling, measurement, and nonresponse. These are briefly described in Table 1.

Table 1.
Types of Survey Error

Coverage: Coverage error reflects the extent to which the frame from which a survey sample is drawn does not include every member of the survey target population. The study methods should specify how the sample frame was obtained. This will differ depending on the mode used to administer the survey, i.e., telephone, internet, mail, or a combination. Each mode has inherent coverage issues that need to be considered (Dillman, Smyth, & Christian, 2009, pp. 43-49). You can assess coverage by asking the question: Would everyone in the survey population have the same chance of being included in the sample?
Sampling: Sampling error is the result of surveying only a sample of the survey target population; it amounts to the difference between the sampled respondents and the entire survey population. Sampling error is highly dependent on sample size, hence the common recommendation that larger sample sizes are almost always better. Of all the types of error, sampling error is the most quantifiable because it can be calculated mathematically; it is the familiar reported margin of error, e.g., ±3% (a worked example follows Table 1). The impact of sample size on margin of error and cost is discussed by Verma & Burnett (1996).
Measurement: Measurement error concerns the accuracy of the information collected, which can be affected by question wording, design, or delivery. Willingness to provide certain types of information (even to a stranger promising anonymity of the data) can influence whether the truth is reported, and recall limitations can affect reporting of behavioral data. Collecting solid data on attitudes and opinions is more difficult than collecting factual information. Knowing how a question was worded may provide clues to potential measurement error. The importance of testing questionnaires is emphasized by Radhakrishna (2007).
Nonresponse: Nonresponse error results from not everyone who was sampled responding to the survey. If those who do not respond differ from respondents in a way that is important to the study, e.g., different types of people or business owners, then the results will not be generalizable as intended. Using more than one mode (i.e., a mixed-mode survey) to assess a sample may help reduce nonresponse error as well as address coverage and measurement error (Tobin, Thomson, Radhakrishna, & LaBorde, 2012). If you can get nonresponders to tell you about themselves (even if they will not complete the survey), you will have insight into the nonresponse error. Nonresponse error and ways to address it have been discussed by Miller & Smith (1983), Lindner & Wingenbach (2002), and Radhakrishna & Doamekpor (2008).
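
The relationship between sample size and margin of error noted in Table 1 can be illustrated with a short calculation. The sketch below uses the standard formula for a proportion estimated from a simple random sample at 95% confidence; it is an illustration only and is not part of the survey analysis reported here.

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        """Margin of error for an estimated proportion p from a simple random
        sample of size n, at the confidence level implied by z (1.96 = 95%)."""
        return z * math.sqrt(p * (1 - p) / n)

    # Larger samples shrink the margin of error; roughly 1,067 completed
    # responses are needed for the familiar +/-3% at 95% confidence.
    for n in (100, 500, 1067):
        print(f"n = {n:4d}  margin of error = {margin_of_error(n):.1%}")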

How This Survey Was Designed to Minimize Error

The chosen survey population was all cattle dairy farms shipping milk in the state of Vermont. The sample frame was a list of inspected dairy farms (with small ruminant dairies removed) maintained by the state dairy sanitarian's office. Coverage errors were expected to be minimal.

Based on a sample size calculation, a completed sample of 278 was needed from the estimated 1,000 cattle dairies in the state to achieve a 5% margin of error at 95% confidence. Past survey experience indicated that a 55% return rate was achievable, so sampling 500 dairies was expected to be adequate. A stratified quasi-random sample was selected to receive the survey mailings: every other name on the list of all cattle dairies, sorted by zip code, was selected for inclusion in the sample.
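
The target of 278 is consistent with the usual formula for estimating a proportion, corrected for the finite population of roughly 1,000 farms. The sketch below assumes p = 0.5 and z = 1.96 (95% confidence); the field names in the selection step are hypothetical and only illustrate the every-other-name procedure described above.

    import math

    def completed_sample_size(N, e=0.05, p=0.5, z=1.96):
        """Completed responses needed for margin of error e at the confidence
        level implied by z, corrected for a finite population of size N."""
        n0 = (z ** 2) * p * (1 - p) / e ** 2        # infinite-population size (~384)
        return math.ceil(n0 / (1 + (n0 - 1) / N))   # finite population correction

    print(completed_sample_size(N=1000))            # -> 278 completed surveys

    # Selection step: sort the frame by zip code, then take every other farm.
    # farm_id and zip are hypothetical field names used for illustration.
    frame = [{"farm_id": i, "zip": f"05{i % 100:03d}"} for i in range(1000)]
    sample = sorted(frame, key=lambda farm: farm["zip"])[::2]
    print(len(sample))                              # -> 500 farms mailed the survey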

Measurement error was addressed by following many of the recommendations on question wording, formatting, and visual design given by Dillman, Smyth, & Christian (2009). The survey was professionally typeset and included a color image on the cover page. Several dairy farmers reviewed the instrument as it was being developed and provided useful feedback. Five other dairy farmers pre-tested the instrument.

Nonresponse was addressed by promoting the survey in several ways and by following the principles of tailored design laid out by Dillman, Smyth, & Christian (2009). Prior to survey distribution, the researcher spoke at six dairy cooperative membership meetings held near high densities of dairy farms. Her message focused on the value of the data for preparing the state to respond effectively to a highly contagious disease event. A pre-notice letter was sent to the entire survey population to explain what the survey was about. The sampled farms then received the survey with a cover letter, a follow-up postcard thanking them for or encouraging their participation, a repeat survey mailing with an updated cover letter, and another follow-up postcard. Questions requesting zip code, farm size, and farm type were included to enable later assessment of how representative respondents were of the population.

Findings and Discussion

The mailed survey instrument totaled eight 8 1/2" x 11" pages. The body of the survey, consisting of 31 questions of varying complexity, was formatted on four pages of the instrument. Calculated according to the American Association for Public Opinion Research (2009) standard definitions, as recommended by Wiseman (2003), the response rate was 54%.
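
AAPOR-style response rates divide completed questionnaires by all sampled cases that are eligible or of unknown eligibility, after cases confirmed ineligible (such as farms no longer in business) are removed. The disposition counts in the sketch below are hypothetical and are used only to show the form of the calculation.

    def response_rate(completed, refusals, no_returns, unknown_eligibility):
        """Simplified AAPOR-style response rate: completed questionnaires divided
        by all eligible and potentially eligible sampled cases."""
        return completed / (completed + refusals + no_returns + unknown_eligibility)

    # Hypothetical dispositions for a mailing of 500 surveys (illustration only).
    rate = response_rate(completed=270, refusals=12, no_returns=213, unknown_eligibility=5)
    print(f"response rate = {rate:.0%}")            # -> 54%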

Coverage was excellent because the dairy sanitarian's office partnered with the researcher in mailing the surveys. In the handful of instances where change of address information was received, materials and surveys were re-mailed to the corrected addresses. In a few cases, farms did not meet the criteria for inclusion (e.g., no longer in business) or did not wish to participate.

Using a regulatory contact list minimized coverage error but precluded tracking of nonrespondents to target subsequent mailings. The list was not shared with the researcher, and all mailing labels were printed and affixed in the sanitarian's office. This meant duplicate surveys were mailed to all farms in the sample, even those that had already completed the survey. However, because a sample was used, the total number of mailings was still lower than if a census had been attempted (mailing surveys to every address in the population). Re-mailing surveys and using reminders increased the response rate, and distinct waves of survey returns followed each mailing. The final number of responses was slightly below the target, but in the author's opinion the resulting margin of error was acceptable.

Despite attention to the potential for measurement error, it remained a concern. Duplicate returns from the same farm were considered unlikely because of the length of the survey. Simplifying complex questions while keeping the survey length within reason was not easy, given the choice to implement a self-administered mail survey. Pre-testing revealed that respondents may not read instructions before answering questions, underscoring the need to format the response area intuitively. Despite multiple rounds of testing and proofreading, a typo slipped through in a key question.

Nonresponse error was considered likely for this survey. To assess whether farms that chose not to participate differed in important ways from those that did, the analysis confirmed that respondents were geographically and functionally (organic versus conventional) representative of the population. Another approach to investigating nonresponse involved comparing key characteristics of early and later respondents (a sketch of such a comparison follows). Targeting the sample population with multiple mailings reduced nonresponse; however, nonresponse to specific questions or parts of questions within the survey was still an issue, so analyses were conducted based on the number of responses to individual questions.
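
The early-versus-late comparison rests on the assumption that later respondents resemble nonrespondents (Miller & Smith, 1983). A minimal sketch of such a comparison follows; the herd-size values and variable names are hypothetical and are not drawn from the survey data.

    from scipy import stats

    # Hypothetical herd sizes (cows per farm) for illustration only:
    # "early" = returned after the first mailing, "late" = returned after follow-ups.
    early_herd_sizes = [45, 60, 120, 80, 55, 200, 95, 70, 150, 65]
    late_herd_sizes = [50, 75, 110, 90, 40, 180, 85, 60, 140, 100]

    # A nonsignificant difference on a key characteristic offers some reassurance
    # that nonresponse bias on that characteristic is limited.
    t_stat, p_value = stats.ttest_ind(early_herd_sizes, late_herd_sizes, equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.2f}")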

Conclusions

Conducting a survey to obtain valid generalizable results requires careful planning. Attention to detail in preparing and implementing a survey can reduce errors due to issues with coverage, sampling, measurement, and nonresponse. Consulting a textbook or guide that addresses current issues in design and implementation of surveys will help you achieve your survey goals.

References

Dillman, D. A., Smyth, J. D., & Christian, L. M. (2009). Internet, mail, and mixed-mode surveys: The tailored design method (3rd ed.). Hoboken, NJ: Wiley.

Groves, R. M. (1989, 2004). Survey errors and survey costs. New York: Wiley.

Lindner, J. R., & Wingenbach, G. J. (2002). Communicating the handling of non-response error in Journal of Extension research in brief articles. Journal of Extension [On-line], 40(6), Article 6RIB1. Available at: http://www.joe.org/joe/2002december/rb1.php

Miller, L. E., & Smith, K. L. (1983). Handling nonresponse issues. Journal of Extension [On-line], 21(5). Available at: http://www.joe.org/joe/1983september/83-5-a7.pdf

Radhakrishna, R. B. (2007). Tips for developing and testing questionnaires/instruments. Journal of Extension [On-line], 45(1), Article 1TOT2. Available at: http://www.joe.org/joe/2007february/tt2.php

Radhakrishna, R., & Doamekpor, P. (2008). Strategies for generalizing findings in survey research. Journal of Extension [On-line], 46(2), Article 2TOT1. Available at: http://www.joe.org/joe/2008april/tt1.php

The American Association for Public Opinion Research (2009). Standard definitions: Final dispositions of case codes and outcome rates for surveys (6th ed.). AAPOR.

Tobin, D., Thomson, J., Radhakrishna, R., & LaBorde, L. (2012). Mixed-mode surveys: A strategy to reduce costs and enhance response rates. Journal of Extension [On-line], 50(6), Article 6TOT8. Available at: http://www.joe.org/joe/2012december/tt8.php

Verma, S., & Burnett, M. F. (1996). Cutting evaluation costs by reducing sample size. Journal of Extension [On-line], 34(1), Article 1FEA2. Available at: http://www.joe.org/joe/1996february/a2.php

Wiseman, F. (2003). On the reporting of response rates in Extension research. Journal of Extension [On-line], 41(3), Article 3COM1. Available at: http://www.joe.org/joe/2003june/comm1.php