June 2003 // Volume 41 // Number 3 // Commentary // 3COM1


On the Reporting of Response Rates in Extension Research

Abstract
Extension researchers have been encouraged to report response rates obtained in sample surveys. Unfortunately, there is little agreement among survey researchers as to the exact meaning of this term or how it should be calculated and operationally defined. Recently, an industry-wide task force attempted to resolve this problem by recommending alternative definitions and methods of calculation. Researchers are encouraged to implement the task force's recommendations so that others might be in a better position to properly evaluate the projectability of survey results.


Frederick Wiseman
Professor of Marketing and Statistics
Northeastern University
Boston, Massachusetts
Internet Address: f.wiseman@neu.edu


In a recent Journal of Extension article, Lindner and Wingenbach (2002) presented the results of an investigation into the treatment of non-response error in Research in Brief articles appearing in this journal from 1995 to 1999. One of the conclusions reached was that researchers should report response rates and discuss how potential non-response error was handled because failure to do so brings the validity of survey findings into question.

Surveys that have high response rates provide a measure of reassurance that the findings that are obtained can be projected to the population from which the sample was drawn. On the other hand, findings that are obtained in surveys that have low response rates can be questioned because little, if anything, is known about whether non-respondents differ from respondents.

During the last quarter century, there has been a general lack of industry-wide standards with respect to the meaning, interpretation, and method of calculation of a survey's response rate. There are numerous reasons for this, including the emergence of more complex sampling and data collection methods that have made the computation of a response rate more difficult. With declining response rates, some researchers have creatively redefined the term to suggest a higher quality data collection effort than was actually the case. As a result, researchers should not only report a response rate, as noted by Lindner and Wingenbach, but they should also give the details as to how the rate was calculated. Unfortunately, this is not always done. In such situations, a reported response rate provides little, if any, useful information.

Two task forces, one formed in 1982 and the other in 2000, have sought to develop a standardized definition and reporting procedure for the response rate in a survey. This Commentary discusses some of the recommendations that were made by these task forces in an attempt to bring about industry-wide standards. I hope that researchers will adopt the recommendations so that when a response rate is reported, all will know how it is calculated and what it implies.

Background

The size of the non-response error in any survey is a function of two factors:

  1. Response rate and
  2. Extent to which respondents differ from non-respondents.

If either a high response rate is achieved or respondents do not differ from non-respondents, then non-response error is not a problem. In fact, non-response error is only a problem if a low response rate is achieved and respondents differ from non-respondents on one or more of the variables of interest. Because it is difficult to assess whether differences exist between respondents and non-respondents, the response rate (and how it was calculated) should always be reported. Lindner and Wingenbach found that a survey's response rate was reported in 50 out of the 61 surveys that they investigated.
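A standard way to express this relationship, found in most survey-sampling texts rather than in the articles cited here, is the following approximation, written in the same word form as the response rate definition given later in this Commentary:

Bias of the respondent mean ≈ (Proportion of non-respondents in the selected sample) × (Respondent mean − Non-respondent mean)

For example, with 40% non-response and respondents scoring 5 points higher than non-respondents on some attitude scale, the respondent-based estimate would be biased upward by roughly 0.40 × 5 = 2 points; if either factor is near zero, so is the bias.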

The conclusion reached by Lindner and Wingenbach was consistent with the call made 20 years earlier by Miller and Smith (1983). In their article, Miller and Smith noted that the practice of ignoring non-respondents leads many people to question the overall validity of survey research and that non-respondents cannot be ignored if evaluation studies are to have external validity. More recently, Lindner, Murphy, and Briers (2001) indicated that steps must be taken to account for possible non-response error whenever a response rate is less than 85%.

The problem of non-response is common to all those who conduct surveys, and over the last quarter century, numerous researchers have cautioned about the problem. Smith (1999) provides an excellent review of this literature. For example, in 1978, in response to a request from the National Science Foundation and the American Statistical Association, Bailar and Lanphier (1978) sought to determine the extent to which government-funded surveys had met their objectives. They found that, due to a variety of technical flaws, including low response rates, 22 of the 36 surveys they examined did not accomplish what they had been designed to do.

At the same time, members of the US Congress became concerned about the possibility that poor quality survey data were being used for decision-making purposes. Congress asked the General Accounting Office to determine the likelihood that incorrect or unreliable information was being generated by opinion polls and attitude surveys conducted by the federal government. The results of this investigation (Comptroller General of the United States, 1978) were similar to those reported by Bailar and Lanphier.

In addition, with the support of the Marketing Science Institute and the Council of American Survey Research Organizations (CASRO), a trade association whose members are major US public opinion research firms, Wiseman and McDonald (1978) conducted an industry-wide study of non-response in the commercial research sector. They found that, on average, 40% of all selected sample members were never contacted and that approximately one in four sample members who were contacted refused to be interviewed.

When these results were presented to the CASRO membership, questions arose as to how response rates should be calculated. There was also disagreement as to the meaning of this term. In response, Wiseman and McDonald (1980) conducted another study in which research directors at CASRO firms were surveyed. These research directors were given the response outcomes for three surveys and asked to calculate the response rate in each survey. The data for a telephone survey, in which all selected respondents were eligible to be interviewed, are given in Table 1.

Table 1.
Telephone Survey Results

Outcome                                                          Number
Disconnected/non-working number                                     426
Household refusal                                                   153
No answer, busy, not at home                                       1757
Interviewer reject (language barrier, hard of hearing, . . .)       187
Respondent refusal                                                  366
Termination by respondent during the interview                       74
Completed interview                                                 501
Total                                                              4175

The response rates calculated for this survey by the research directors in the sample ranged from a low of 12% to a high of 90%. In total, the 40 respondents gave 29 different definitions, with the most frequently reported definition being given only three times.
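To illustrate how the same disposition counts can yield such divergent figures, the short Python sketch below applies three plausible definitions to the Table 1 data. These particular definitions are hypothetical examples chosen for illustration, not the definitions actually reported in the Wiseman and McDonald (1980) study, although the first one does reproduce the 12% figure at the low end of the range.

# Disposition counts from Table 1
disconnected = 426
household_refusal = 153
no_answer = 1757
interviewer_reject = 187
respondent_refusal = 366
termination = 74
completed = 501
total_sample = 4175   # sum of all outcomes

# Definition A: completed interviews over the entire selected sample
rate_a = completed / total_sample                                 # about 12%

# Definition B: completed interviews over all contacted households that
# could have yielded an interview (refusals, terminations, completes)
rate_b = completed / (household_refusal + respondent_refusal
                      + termination + completed)                  # about 46%

# Definition C: completed interviews over interviews that were started
rate_c = completed / (completed + termination)                    # about 87%

for name, rate in (("A", rate_a), ("B", rate_b), ("C", rate_c)):
    print(f"Definition {name}: {rate:.0%}")

Each of these rates is arithmetically defensible, yet they send very different signals about data quality, which is exactly why a bare number without its definition tells the reader so little.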

CASRO and AAPOR Task Forces

In light of these results, the CASRO Board of Directors formed a special task force. This task force had as its principal objective the establishment of a standardized definition and a reporting procedure for survey response rates. The task force, which included representatives from the Bureau of the Census, the Office of Management and Budget, commercial research organizations, and academia, recommended the following definition (CASRO, 1982):

Response rate = (Number of completed interviews with reporting units) / (Number of eligible reporting units in the sample)

The task force provided this overall definition for response rate, but noted that in many surveys it would not be possible to determine the eligibility of certain selected reporting units. Thus, certain estimation procedures would be necessary.
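One widely used estimation approach, often associated with this CASRO report, is to assume that the cases whose eligibility could not be determined are eligible at the same rate as the cases whose eligibility could be resolved. The following minimal Python sketch uses hypothetical counts (they are not drawn from any survey discussed in this Commentary) to show the arithmetic.

# Hypothetical dispositions, for illustration only
completed = 500          # completed interviews with eligible reporting units
known_eligible = 800     # resolved cases confirmed eligible (includes the completes)
known_ineligible = 200   # resolved cases confirmed ineligible
unknown = 300            # cases whose eligibility could not be determined

# Estimated eligibility rate among the unresolved cases, borrowed from the resolved ones
e = known_eligible / (known_eligible + known_ineligible)        # 0.80

# Response rate = completes / estimated number of eligible reporting units
estimated_eligible = known_eligible + e * unknown               # 800 + 0.80 * 300 = 1040
response_rate = completed / estimated_eligible                  # 500 / 1040, about 48%
print(f"e = {e:.2f}, response rate = {response_rate:.0%}")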

While the survey research industry wrestled with the problem of non-response in the 1980s and 1990s, it was not until three years ago that a major undertaking took place under the auspices of the American Association for Public Opinion Research (AAPOR <http://www.aapor.org/>). This organization, whose membership includes survey research professionals, created a task force to build upon the work of the CASRO task force and to provide the necessary details that had been missing prior to that time. Their report <http://www.aapor.org/pdfs/newstandarddefinitions.pdf> outlined how the response rate should be defined and calculated in various types of surveys. In fact, six alternative response rate formulas and methods of calculation are given because the appropriate formula to use depends, in part, upon what assumptions are made regarding those sample members whose eligibility could not be determined. The task force made the following recommendation (AAPOR, 2000):

In reporting response rates, . . . researchers must precisely define which rates are being used. For example, a statement that "the response rate is X" is unacceptable. One must report on exactly which rate was used such as "Response Rate 2 was X." In addition, a table showing the final disposition codes for all cases should be prepared for the report and made available upon request.

The calculation of a response rate in a survey is facilitated by a Response Rate Calculator <http://www.aapor.org/default.asp?page=survey_methods/response_rate_calculator>. This is an Excel spreadsheet, provided by the task force, that calculates a response rate once the researcher provides such data as the number of sample members originally selected, the number of refusals, and the number of sample members not contacted.
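The spreadsheet itself must be obtained from AAPOR, but the arithmetic behind the first and third of the six rates is easy to reproduce. The Python sketch below reflects my reading of the AAPOR disposition categories (completed interviews I, partial interviews P, refusals and break-offs R, non-contacts NC, other non-response O, and cases of unknown eligibility U); it is offered as an illustration, not as the task force's calculator.

def response_rate_1(I, P, R, NC, O, U):
    """AAPOR Response Rate 1: completes over all eligible plus unknown-eligibility cases."""
    return I / ((I + P) + (R + NC + O) + U)

def response_rate_3(I, P, R, NC, O, U, e):
    """AAPOR Response Rate 3: like RR1, but only the estimated-eligible
    fraction e of the unknown-eligibility cases stays in the denominator."""
    return I / ((I + P) + (R + NC + O) + e * U)

# Hypothetical dispositions, for illustration only
rr1 = response_rate_1(I=500, P=20, R=300, NC=400, O=80, U=200)
rr3 = response_rate_3(I=500, P=20, R=300, NC=400, O=80, U=200, e=0.60)
print(f"Response Rate 1 = {rr1:.0%}, Response Rate 3 = {rr3:.0%}")

Following the recommendation quoted above, the researcher would then report, for example, that "Response Rate 3 was 35%," accompanied by a table of the final disposition codes.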

Conclusion

At the beginning of this Commentary, I mentioned that 50 out of the 61 surveys appearing in Research in Brief articles from 1995 to 1999 reported a response rate. However, the researchers did not always present the details as to how the rate was calculated. I hope that, with the implementation of the CASRO and AAPOR recommendations, a standardization of the reporting of response rates can be achieved and that the response rate for each survey reported in this and in other journals will be calculated and interpreted in a similar fashion. At the same time, attention must also be focused on steps to achieve high response rates and to determine the extent to which respondents differ from non-respondents in sample surveys.

References

Bailar, B., & Lanphier, M. (1978). Development of survey methods to assess survey practices. Washington, D.C.: American Statistical Association.

Council of American Survey Research Organizations. (1982). Special report: On the definition of response rates. Port Jefferson, NY: CASRO.

Comptroller General of the United States. (1978). Better guidance and controls needed to improve federal surveys of attitudes and opinions. GAO, GGD-78-24.

Lindner, J. R., Murphy, T. H., & Briers, G. E. (2001). Handling nonresponse in social science research. Journal of Agricultural Education, 42(4), 43-53.

Lindner, J. R., & Wingenbach, G. J. (2002). Communicating the handling of nonresponse error in Research in Brief articles. Journal of Extension [On-line], 40(6). Available at: http://www.joe.org/joe/2002december/rb1.shtml

Miller, L. E., & Smith, K. L. (1983). Handling nonresponse issues. Journal of Extension [On-line], 21(5). Available at: http://www.joe.org/joe/1983september/83-5-a7.pdf

Smith, T. W. (1999). Developing nonresponse standards. International Conference on Nonresponse. Available at: http://www.norc.uchicago.edu/online/nonre.htm

The American Association for Public Opinion Research. (2000). Standard definitions: Final dispositions of case codes and outcome rates for surveys. Lenexa, Kansas: AAPOR.

Wiseman, F., & McDonald, P. R. (1978). The nonresponse problem in consumer telephone surveys. Cambridge, MA: The Marketing Science Institute.

Wiseman, F., & McDonald, P. R. (1980). Toward the development of industry standards in the reporting of response rates in survey research. Cambridge, MA: The Marketing Science Institute.