April 2008 // Volume 46 // Number 2 // Tools of the Trade // 2TOT1
Strategies for Generalizing Findings in Survey Research
Abstract
Surveys are the most common data collection method used by Extension professionals, agricultural educators, and researchers in the social and behavioral sciences. These researchers rarely obtain a 100% response rate, which raises the question: what can you do to enhance the external validity of your study? By following certain procedures, making logical choices, and providing clear explanations, you can strengthen the external validity of your findings. This article suggests strategies for doing so, so that Extension professionals can interpret the results of their research with both clarity and caution.
Background
Extension professionals and survey researchers are concerned about generalizing findings because of the low response rates obtained in surveys. Several studies have examined the issue of low response rates in surveys and its impact on generalizing findings. According to Wiseman (2003), findings obtained in surveys with low response rates are questionable because little is known about whether non-respondents differ from respondents (p. 1).
Lindner and Wingenbach (2002), in their review of Research in Brief articles published in the Journal of Extension from 1995 through 1999, found that non-response error was a threat to external validity in 82% of the articles. They also found that 80% of the articles mentioned no attempts to control for non-response. Shinn, Baker, and Briers (2007) studied response patterns in e-mail surveys and concluded that "response rate issues in survey research are complex and multifaceted" (p. 8) and are likely to be influenced by a variety of factors.
What can you do to enhance the external validity of your study, that is, to determine to whom your findings apply? And to whom can you generalize those findings? Can you generalize to the population, to the sample, or only to the subjects who responded and provided complete data, or should you generalize at all? These are frequent questions faced by Extension professionals, faculty, graduate students, and researchers engaged in survey research.
If you follow certain procedures in handling the data, however, you can enhance the external validity of your study. Here are some key points to consider about generalizing findings in survey research:
- The population and sample;
- Response rates;
- Comparison of early, late, and non-respondents; and
- The results of comparison.
Population or Sample
Determine how the subjects for the study were selected. Is it a census or a sample? If a sample, what type was used: random (probability) or non-random (non-probability)? Knowing how the subjects were selected helps determine whether the findings can be generalized.
Response Rate
The next step is to determine how many subjects responded to the survey. Calculate the response rate. If you obtain a 100% response from a census, the question of generalizing does not arise because everyone responded; with a 100% response from a random sample, you can generalize directly to the population. In practice, however, you rarely get a 100% response. If you get a 68% response, you should ask why the other 32% did not respond. Are the data valid only for the 68% who responded, or are there procedures that allow you to use a 68% return rate to generalize the findings to the remaining 32%? Miller and Smith (1983) and, more recently, Lindner, Murphy, and Briers (2001) provide answers to this dilemma.
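As a quick illustration, the response rate is simply the number of completed responses divided by the number of subjects surveyed, expressed as a percentage. The following sketch is illustrative only; the function and variable names are not from the article:

```python
def response_rate(num_responses: int, sample_size: int) -> float:
    """Return the survey response rate as a percentage."""
    if sample_size <= 0:
        raise ValueError("sample size must be positive")
    return 100.0 * num_responses / sample_size

# e.g., 68 completed surveys returned out of 100 sampled subjects
rate = response_rate(68, 100)          # 68.0
nonresponse = 100.0 - rate             # 32.0, the share you must account for
```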
Comparing Early, Late, and Non-Respondents
Identify subjects who responded to the first mailing by the deadline date, and label them as early respondents. Similarly, label all subjects who responded to subsequent mailings as late respondents. After data collection is complete, identify and label the non-respondents. According to Miller and Smith (1983), non-respondents tend to be similar to late respondents. Therefore, compare the early and late respondent groups on key variables (Figure 1). If you find no statistically significant differences between early and late respondents, you can reasonably conclude that non-respondents are similar to late respondents and thus generalize the findings to the population. The other accepted procedure is to follow up by telephone with 15-20% of the non-respondents and collect data from them on the key variables. Then compare early versus late respondents, early versus non-respondents, and late versus non-respondents.
Figure 1. Logic of Comparing Early, Late, and Non-Respondents
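The labeling step described above can be sketched in code. This is an illustrative helper, not part of the article's method, and it assumes you have recorded each subject's response date (with `None` for subjects who never responded):

```python
from datetime import date

def classify_respondents(response_dates: dict, deadline: date) -> dict:
    """Label each subject "early", "late", or "non".

    response_dates: maps subject id -> date the response arrived,
                    or None if the subject never responded.
    deadline: the cutoff date for the first mailing.
    """
    labels = {}
    for subject, responded in response_dates.items():
        if responded is None:
            labels[subject] = "non"        # never responded
        elif responded <= deadline:
            labels[subject] = "early"      # responded by first-mailing deadline
        else:
            labels[subject] = "late"       # responded to a follow-up mailing
    return labels
```

The resulting labels define the groups compared on the key variables.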
If the comparison indicates no differences among these three groups, you can generalize the findings to the population. If, on the other hand, you find significant differences, you cannot. In that case, use your judgment about whether to include the variables on which the groups differed in the final analysis. Normally, such variables are eliminated from further analysis, with an explanation of why the subjects differed on them; otherwise, provide justification for retaining them in the final analysis.
Use an independent-samples t-test to compare early and late respondents, early and non-respondents, and late and non-respondents. Use ANOVA if you want to compare all three response types (early, late, and non-respondents) at once, and conduct a post-hoc analysis to determine which groups differ.
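For readers without SPSS at hand, the two-group comparison can be sketched by hand. The following computes Welch's t statistic (the unequal-variances form of the independent-samples t-test) on hypothetical Likert-scale data; in practice a statistics package would also report the p-value needed to judge significance:

```python
from math import sqrt
from statistics import mean, variance

def welch_t(group_a: list, group_b: list) -> float:
    """Welch's t statistic for two independent samples (unequal variances)."""
    na, nb = len(group_a), len(group_b)
    va, vb = variance(group_a), variance(group_b)   # sample variances
    return (mean(group_a) - mean(group_b)) / sqrt(va / na + vb / nb)

# hypothetical 5-point Likert scores on one key variable
early = [4, 5, 3, 4, 5]
late = [3, 4, 4, 5, 3]
t_stat = welch_t(early, late)
# compare t_stat against the critical t value for your alpha level;
# a non-significant result suggests the groups do not differ
```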
Lindner, Murphy, and Briers (2001) suggested using "days to respond" as a regression variable for handling non-response error. In SPSS, create a "days to respond" variable and code it as continuous (for example, 5, 20, or 32 days). Then regress the primary variables of interest on "days to respond." "If the regression model is not significant, then it can be assumed that non-respondents do not differ from respondents" (Lindner, Murphy, & Briers, 2001, p. 6). Table 1 summarizes the strategies for generalizing findings in survey research.
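The "days to respond" check can be illustrated with a hand-computed simple linear regression on hypothetical data. SPSS (or any statistics package) would additionally report the significance test for the model; here we compute only the slope and the Pearson correlation, whose magnitudes indicate how strongly the key variable is related to response speed:

```python
from math import sqrt
from statistics import mean

def simple_regression(x: list, y: list) -> tuple:
    """Return (slope, pearson_r) for regressing y on x via ordinary least squares."""
    mx, my = mean(x), mean(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    return sxy / sxx, sxy / sqrt(sxx * syy)

# hypothetical data: days to respond vs. a key survey variable
days = [2, 5, 9, 14, 21]
scores = [4, 5, 3, 4, 5]
slope, r = simple_regression(days, scores)
# a near-zero slope / non-significant model suggests respondents do not
# differ by how quickly they responded (Lindner, Murphy, & Briers, 2001)
```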
Table 1. Strategies for Generalizing Findings in Survey Research

Sample Type | Compared Early/Late/Non-Respondents? | Result of Comparison | Generalize Findings To |
Census | No | - | Only those who responded |
Census | Yes | No difference | The census (all) |
Random sample | No | - | Population* |
Random sample | Yes | No difference | Population |
Non-random sample | - | - | Cannot generalize |
* Somewhat limits the external validity of the study
Conclusions and Implications
Following accepted procedures, making logical choices, and providing appropriate explanations and justifications for how non-respondents were followed up will help you describe the data accurately and thus enhance the external validity of the study. Extension professionals, survey researchers, and graduate students all benefit from following sound research methods and procedures.
Using the strategies suggested in this article and describing the procedures used to handle non-response error will not only enhance the external validity of a study, but also improve the criteria, standards, and rigor of research carried out by Extension professionals. The decision to generalize research findings to the accessible population or to the general population needs to be made clear in research articles so that readers can interpret results with caution and replicate the studies in similar or other settings.
References
Lindner, J. R., & Wingenbach, G. J. (2002). Communicating the handling of nonresponse error in Journal of Extension Research in Brief Articles. Journal of Extension [On-line], 40(6). Available at: http://www.joe.org/joe/2002december/rb1.shtml
Lindner, J. R., Murphy, T. H., & Briers, G. E. (2001). Handling non-response in social science research. Journal of Agricultural Education, 42(4), 43-53.
Miller, L. E., & Smith, K. (1983). Handling non-response issues. Journal of Extension [On-line], 21(5). Available at: http://www.joe.org/joe/1983september/83-5-a7.pdf
Shinn, G., Baker, M., & Briers, G. (2007). Response patterns: Effect of day of receipt of an E-mailed survey instrument on response rate, response time, and response quality. Journal of Extension [On-line], 45(2), Article 2RIB4. Available at: http://www.joe.org/joe/2007april/rb4.shtml
Wiseman, F. (2003). On the reporting of response rates in Extension research. Journal of Extension [On-line], 41(3). Available at: http://www.joe.org/joe/2003june/comm1.shtml