December 2002 // Volume 40 // Number 6 // Research in Brief // 6RIB1


Communicating the Handling of Nonresponse Error in Journal of Extension Research in Brief Articles

Abstract
This article reports a study designed to describe historical treatment of nonresponse error in the Journal of Extension. All Research in Brief articles (N=83) published in JOE (1995-99) were analyzed using content analysis techniques. Results showed that not mentioning nonresponse error, not controlling nonresponse error, or not citing the literature were the norm and not the exception. It is recommended that Extension researchers address nonresponse error when it is a threat to the external validity of their study. Recommendations for additional study and adoption of methods for handling nonresponse are provided.


James R. Lindner
Internet Address: j-lindner@tamu.edu

Gary J. Wingenbach
Internet Address: g-wingenbach@tamu.edu

Department of Agricultural Education
Texas A&M University
College Station, Texas


Introduction

How can social science researchers improve the criteria, standards, and level of rigor of scholarship reported in the Journal of Extension (Norman, 2001)? Scholarship. A single word that strikes fear or reverence in the hearts of many agricultural and Extension professionals when they communicate their research to peers and the public.

Social science professionals realize that the credibility of reporting quality research lies in the perceived "equality" of that research when viewed by our colleagues in the hard sciences. Social scientists must strive to assure our peers that research conducted within our discipline follows methods and protocols similar to those practiced in the hard sciences. One important step toward this goal is to confront the issue of nonresponse error in social science survey research.

Scholarship in the Journal of Extension (JOE) is further defined as creative work that is validated by peers and communicated to the profession and the general public (Weiser, 1996; Weiser & Houglum, 1998). Weiser expanded on the earlier work of Boyer (1990) by describing five forms of scholarship: he retained Boyer's original forms (discovery, integration, and application), changed the teaching form to learning and teaching, and added creative artistry as a fifth form.

These forms alone cannot adequately define scholarship for Extension professionals; if they did, then nearly all faculty members' activities could be considered scholarly endeavors. What ultimately qualifies a work as scholarly are the criteria, standards, and level of rigor (Norman, 2001) applied when it is reviewed and evaluated by peers in the hard sciences, especially when it is being assessed for promotion and tenure decisions. Social science researchers, therefore, must reconsider at least one aspect of their research methodology: the handling of nonresponse error in survey research.

Nearly 20 years ago, Miller and Smith (1983) published the bellwether article on the treatment of nonresponse error in survey research. The article, published in JOE, illustrated five generally accepted methods for handling the nonresponse error that threatens the external validity of studies employing sampling techniques. Such efforts to improve our research methods are necessary to ensure the objectivity and rigor of research. Miller (1998) noted that "numerous improvements can be made in our research" (p. 10) and suggested that the profession continue to devote personal time to renewing, maintaining, and improving our ability to use appropriate research methods and techniques.

Improving research in agricultural and Extension education requires a periodic examination of research methods and techniques. In taking a step forward with this critical review of handling nonresponse error, it behooves us to recall the scholarship questions posed by Miller and Sandman (2000): "How do we assure scholarly standards?" and "How can we assure that new entrants to the field are professionally socialized to contribute to scholarship?" (p. 39).

As JOE board members rethink and reconsider the journal's criteria, standards, and level of rigor to redefine scholarship for Extension (Norman, 2001), a need exists to demonstrate research relevance to both higher education and the public. The results of this study provide information that may be useful in this debate.

Purpose

The purpose of the study reported here was to explore and describe the treatment of nonresponse error in Journal of Extension Research in Brief articles for the years 1995 through 1999.

Specific objectives included describing:

  1. The types of sampling procedures used in JOE Research in Brief articles.
  2. Response rates.
  3. How often nonresponse error was mentioned, how it was controlled, and the results of attempts to control it.
  4. Literature cited in handling nonresponse error.

Methods

All Research in Brief articles (N = 83) published in the Journal of Extension from 1995 through 1999 were analyzed using content analysis techniques (Fraenkel & Wallen, 1999). Data were analyzed using SPSS. The instrument, developed by Lindner, Murphy, and Briers (2001), used seven coding categories to gather data.

Articles were coded for use of sampling procedures (presence or absence), and response rate was coded as the actual rate achieved. Mention of nonresponse error as a possible threat to external validity was coded as mentioned nonresponse, did not mention nonresponse, or 100% response rate achieved. How nonresponse error was handled was coded into the categories proposed by Miller and Smith (1983). Literature cited was coded by the actual reference to the literature. Results of efforts to control for nonresponse error were coded as no differences found, differences found, or did not indicate results. Sampling procedures used were coded into one of nine categories.

Each article was independently read and analyzed by two of the researchers, and each researcher's analysis was entered onto the data collection instrument. To establish the reliability of the instrument, the two researchers' results were compared to identify discrepancies. Fewer than one discrepancy per issue existed. When discrepancies did exist, the two researchers, working together, reanalyzed the data and agreed on the correct code.
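
A minimal sketch of this discrepancy check in Python, with invented article identifiers and codes (the actual analysis was recorded on paper data collection instruments, not in software):

    # Hypothetical sketch of the inter-coder discrepancy check described above.
    # Article IDs and codes are invented for illustration.
    coder_a = {"article_01": "census", "article_02": "convenience", "article_03": "purposive"}
    coder_b = {"article_01": "census", "article_02": "purposive", "article_03": "purposive"}

    # Flag articles where the two researchers' codes disagree, then report
    # simple percent agreement and the items to reconcile jointly.
    discrepancies = [k for k in coder_a if coder_a[k] != coder_b[k]]
    agreement = 1 - len(discrepancies) / len(coder_a)
    print(f"Agreement: {agreement:.0%}; codes to reconcile: {discrepancies}")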

Findings

Objective One

Eighty-three Research in Brief articles were published in JOE during 1995-1999. Approximately 74% (N = 61) of these articles used sampling procedures. As revealed in Table 1, the sampling procedures used most often were census (29.5%), convenience (23.0%), and purposive (16.4%). The procedures used least were cluster (4.9%) and Delphi (1.6%). Three articles did not report their sampling procedures.

Table 1.
Sampling Procedures Used in Research in Brief Articles Published in the Journal of Extension (N = 61)

Sampling Procedure           n      %
Census                      18   29.5
Convenience Sampling        14   23.0
Purposive Sampling          10   16.4
Simple Random Sampling       7   11.5
Stratified Sampling          5    8.2
Not Reported                 3    4.9
Cluster Sampling             3    4.9
Delphi Sampling              1    1.6
Total                       61  100.0

Objective Two

Table 2 shows the response rates of the studies whose results were published. The average response rate was 71.5% (SD = 22.9), with a minimum of 14% and a maximum of 100%. Approximately 18% of the studies reported that a 100% response rate was achieved, while about 15% reported response rates of less than 50%. Almost 20% of the studies did not report a response rate.

Table 2.
Response Rate of Research in Brief Articles Published in the Journal of Extension (N = 61)

Response Rate                    n      %
100%                            11   18.0
90-99%                           4    6.6
80-89%                           4    6.6
70-79%                           8   13.1
60-69%                           8   13.1
50-59%                           6    9.8
Less than 50%                    9   14.8
Did not report response rate    11   18.0
Total                           61  100.0

Note: Mean = 71.5%; SD = 22.9; Min = 14%; Max = 100%

Objective Three

Table 3 shows that about 20% of the JOE articles mentioned nonresponse error as a potential threat to external validity. For almost 20% of the articles, nonresponse error was not a threat to external validity because a 100% response rate was achieved. About 60% of the articles did not mention nonresponse error as a potential threat to external validity. Overall, nonresponse was a potential threat to external validity in 50 of the 61 studies (82%).

No attempt was made to control for nonresponse error in 40 of the 50 articles. In six of the remaining articles, authors handled nonresponse error by comparing early to late respondents. In the other four, authors attempted to control for nonresponse error by following up with nonrespondents. In none of the 10 articles where nonresponse was handled were differences reported between early and late respondents or between respondents and nonrespondents.

Table 3.
Frequency That Nonresponse Error as a Potential Threat to External Validity Was Mentioned in Research in Brief Articles Published in the Journal of Extension

                                                      All articles    Within category
Factor                                                  n      %         n       %
Less than 100% response rate achieved                  50   82.0
     Mentioned nonresponse                             13   21.3        13    35.1
     Did not mention nonresponse                       37   60.7        37    64.9
          Nonresponse a threat to external validity    50   82.0        50   100.0
100% response rate achieved                            11   18.0
     Mention of nonresponse not necessary              11   18.0        11   100.0
     Nonresponse not a threat to external validity     11   18.0        11   100.0
Grand Total                                            61  100.0

Objective Four

A reference citation for the appropriate handling of nonresponse error was not provided in 47 of the 50 articles where nonresponse error was a potential threat to external validity. Three articles (6.0%) cited Miller and Smith (1983) as a source for handling nonresponse error.

Conclusions

Based on the results of this study, the following conclusions are drawn. To ensure the external validity or generalizability of research findings to the target population, researchers must satisfactorily answer the question of whether the results of the survey would have been the same even if a 100% response rate had been achieved (Richardson, 2000).

Seven different general sampling procedures were used to collect data for the 61 Research in Brief articles published in the Journal of Extension. Nonresponse error can be a threat to the external validity of a study when any of these sampling procedures is used and a response rate of less than 100% is achieved. A 100% response rate was achieved in 11 of the articles published in JOE. Nonresponse, therefore, was a potential threat to external validity in 50 articles. In approximately 60% of these 50 articles, nonresponse error was not mentioned as a potential threat to external validity. In 80% of these 50 articles, no attempts to control for nonresponse were mentioned. The external validity of those findings is, therefore, unknown.

In the articles that did attempt to control for it, nonresponse error was treated primarily by comparing early to late respondents or by comparing respondents with a sample of nonrespondents. A total of three reference citations were provided in explaining how nonresponse error was handled. Across the 5 years of JOE Research in Brief articles examined here, no differences were found between early and late respondents or between respondents and nonrespondents: early respondents were similar to late respondents, and respondents were similar to nonrespondents.

As noted throughout this article, not mentioning nonresponse error as a threat to external validity of a study, not attempting to control for nonresponse error, or not providing a reference to the literature were unfortunately the norm and not the exception. To ensure external validity of research findings, statistically sound and professionally acceptable procedures and protocols for handling nonresponse error are needed and should be reported. The authors recommend a follow-up study of the handling of nonresponse error in the Journal of Extension in 5 years to describe the reliability and validity of the recommended procedures. Also recommended is a replication of this study for articles published in other scholarly publications and with other professions to describe the generalizability of these findings to other populations and the applicability of recommendations.

Recommendations for Handling Nonresponse

Future Research in Brief articles published in JOE should, when applicable, report how nonresponse error was handled. Based on the findings of this study and the review of literature, the authors conclude that a need exists for Extension researchers to better address nonresponse error when it is a threat to the external validity of a study. The three methods for handling nonresponse error proposed by Lindner, Murphy, and Briers (2001) are:

  1. Comparison of early to late respondents,
  2. Using "days to respond" as a regression variable, and
  3. Comparison of respondents to nonrespondents.

Lindner, Murphy, and Briers suggested that procedures for handling nonresponse be implemented when less than an 85% response rate is achieved. To further reduce the threat of nonresponse error, it is recommended that a minimum response rate of 50% be achieved (Babbie, 1990; Fowler, 2001; L. E. Miller, personal communication, December 12, 2001).
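
As a rough illustration of these two thresholds, the following sketch encodes the decision rules in Python; the function name and messages are ours, not the cited authors':

    # Hypothetical sketch of the response-rate decision rules cited above:
    # apply a nonresponse-handling method below an 85% response rate, and
    # treat rates below the recommended 50% minimum as cause for extra caution.
    def nonresponse_handling_needed(n_responses: int, n_sampled: int) -> bool:
        rate = n_responses / n_sampled
        if rate < 0.50:
            print(f"Response rate {rate:.0%} is below the recommended 50% minimum.")
        return rate < 0.85

    print(nonresponse_handling_needed(120, 200))  # 60% response rate -> True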

Method 1--Comparison of Early to Late Respondents . . . . One technique to operationally define late respondents is based on responses generated by "successive waves of a questionnaire . . . . So, we recommend that late respondents should be defined operationally as those who respond in the last wave of respondents in successive follow-ups to a questionnaire . . . . If the last stimulus does not generate 30 or more responses, the researcher should "back up" and use responses to the last two stimuli as his or her late respondents. Comparison, then, would be made between early and late respondents on primary variables of interest. Only if no differences are found should results be generalized to the target population . . . . If respondents cannot be categorized by successive waves or if a wave of 30 respondents cannot be defined by successive stimuli, then we recommend that late respondents be defined operationally and arbitrarily as the later 50% of the respondents.
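
A minimal sketch of Method 1 in Python, assuming hypothetical column names ("days_to_respond", "score") and using SciPy's independent-samples t-test; it applies the fallback rule of defining late respondents as the later 50% when wave data are unavailable:

    # Hypothetical sketch: compare early and late respondents on a primary
    # variable, defining late respondents as the later 50% (fallback rule).
    import pandas as pd
    from scipy import stats

    df = pd.read_csv("survey_responses.csv")  # hypothetical data file
    cutoff = df["days_to_respond"].median()
    early = df.loc[df["days_to_respond"] <= cutoff, "score"]
    late = df.loc[df["days_to_respond"] > cutoff, "score"]

    t, p = stats.ttest_ind(early, late, equal_var=False)  # Welch's t-test
    if p >= 0.05:
        print("No early/late difference found; generalizing may be defensible.")
    else:
        print("Early and late respondents differ; limit findings to respondents.")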

Method 2--Using "Days to Respond" as a Regression Variable . . . ."Days to respond" is coded as a continuous variable, and used as an independent variable in regression equations in which primary variables of interest are regressed on the variable "days to respond . . . ." If the regression model does not yield statistically significant results, it can be assumed that nonrespondents do not differ from respondents.
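
A comparable sketch of Method 2, again with hypothetical column names; a simple linear regression of the primary variable on "days to respond" stands in for whatever regression procedure a researcher prefers:

    # Hypothetical sketch: regress a primary variable on "days to respond"
    # and check whether the slope is statistically significant.
    import pandas as pd
    from scipy import stats

    df = pd.read_csv("survey_responses.csv")  # hypothetical data file
    result = stats.linregress(df["days_to_respond"], df["score"])
    print(f"slope = {result.slope:.3f}, p = {result.pvalue:.3f}")

    # A non-significant slope suggests response timing is unrelated to the
    # variable of interest, supporting generalization to nonrespondents.
    if result.pvalue >= 0.05:
        print("Days to respond is unrelated to the variable; threat reduced.")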

Method 3--Compare Respondents to Nonrespondents . . . . Comparisons between respondents and nonrespondents and differences found should be handled by sampling nonrespondents, working extra diligently to get their responses, and then comparing their responses to other previous respondents. A minimum of 20 responses from a random sample of nonrespondents should be obtained. If fewer than 20 nonrespondents are obtained, their responses could be combined with other respondents and used in conjunction with method 1 or 2. (pp. 51-52)
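
A final sketch of Method 3, with hypothetical file and column names; it enforces the 20-response minimum quoted above before comparing groups:

    # Hypothetical sketch: compare original respondents with a follow-up
    # sample of former nonrespondents (at least 20, per the guidance above).
    import pandas as pd
    from scipy import stats

    respondents = pd.read_csv("respondents.csv")["score"]
    followup = pd.read_csv("nonrespondent_followup.csv")["score"]

    if len(followup) < 20:
        print("Fewer than 20 follow-up responses; pool them with respondents "
              "and apply Method 1 or 2 instead.")
    else:
        t, p = stats.ttest_ind(respondents, followup, equal_var=False)
        verdict = "appear similar" if p >= 0.05 else "differ"
        print(f"Respondents and sampled nonrespondents {verdict} (p={p:.3f}).")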

Extension professionals who diligently adhere to one of the aforementioned methods for handling nonresponse error in their future social science surveys will contribute to improving the criteria, standards, and level of research rigor in our profession. Eventually, our colleagues in the hard sciences will recognize that our collective creative works are truly scholarly, contribute new knowledge, and provide valuable information to society. Due diligence in addressing nonresponse error is a necessary component of reporting quality research and is something all current and future social scientists in Extension must attend to if they want their efforts to be viewed as scholarly.

References

Babbie, E. R. (1990). Survey research methods (2nd ed.). Belmont, CA: Wadsworth.

Boyer, E. L. (1990). Scholarship reconsidered--Priorities of the professorate. Princeton, NJ: The Carnegie Foundation for the Advancement of Teaching.

Fowler, F. J., Jr. (2001). Survey research methods (3rd ed.). Thousand Oaks, CA: Sage.

Fraenkel, J. R., & Wallen, N. E. (1999). How to design and evaluate research in education (3rd ed.). New York: McGraw-Hill.

Lindner, J. R., Murphy, T. H., & Briers, G. E. (2001). Handling nonresponse in social science research. Journal of Agricultural Education, 42(4), 43-53.

Miller, L. E. (1998). Appropriate analysis. Journal of Agricultural Education, 39(2), 1-10.

Miller, L. E., & Sandman, L. (2000). A coming of age: Revisiting AIAEE scholarship. Journal of International Agricultural and Extension Education, 7(2), 38-44.

Miller, L. E., & Smith, K. L. (1983). Handling nonresponse issues. Journal of Extension [On-line], 21(5). Available at: http://www.joe.org/joe/1983september/83-5-a7.pdf

Norman, C. L. (2001). The challenge of Extension scholarship. Journal of Extension [On-line], 39(1). Available at: http://www.joe.org/joe/2001february/comm1.html

Richardson, A. J. (2000). Behavioral mechanisms of non-response in mailback travel surveys. Paper presented at the 79th Annual Meeting of the Transportation Research Board, Washington, DC.

Weiser, C. J. (1996). The value of a university--Rethinking scholarship. Oregon State University [On-line]. Available at: http://www.adec.edu/clemson/papers/weiser.html

Weiser, C. J., & Houglum, L. (1998). Scholarship unbound for the 21st century. Journal of Extension [On-line], 36(4). Available at: http://www.joe.org/joe/1998august/a1.html