February 2007 // Volume 45 // Number 1 // Tools of the Trade // 1TOT2


Tips for Developing and Testing Questionnaires/Instruments

Abstract
Questionnaires are the most widely used data collection method in educational and evaluation research. This article describes the process for developing and testing questionnaires and posits five sequential steps: background, questionnaire conceptualization, format and data analysis, establishing validity, and establishing reliability. Systematic development of questionnaires is a must to reduce measurement error. Following these five steps in questionnaire development and testing will enhance data quality and the utilization of research findings.


Rama B. Radhakrishna
Associate Professor
The Pennsylvania State University
University Park, Pennsylvania
Brr100@psu.edu


Introduction

Questionnaires are the most frequently used data collection method in educational and evaluation research. Questionnaires help gather information on knowledge, attitudes, opinions, behaviors, facts, and other information. In a review of 748 research studies conducted in agricultural and Extension education, Radhakrishna, Leite, and Baggett (2003) found that 64% used questionnaires. They also found that a third of the studies reviewed did not report procedures for establishing validity (31%) or reliability (33%). Development of a valid and reliable questionnaire is a must to reduce measurement error. Groves (1987) defines measurement error as the "discrepancy between respondents' attributes and their survey responses" (p. 162).

Development of a valid and reliable questionnaire involves several steps and takes considerable time. This article describes the sequential steps involved in the development and testing of questionnaires used for data collection. Figure 1 illustrates the five sequential steps involved in questionnaire development and testing. Each step builds on the previous steps, which must be fine-tuned and tested before the next step can begin. A brief description of each of the five steps follows Figure 1.

Figure 1.
Sequence for Questionnaire/Instrument Development

Step 1--Background

In this initial step, the purpose, objectives, research questions, and hypotheses of the proposed research are examined. This step also involves determining who the audience is, their background (especially their educational/readability levels), how they can be accessed, and the process used to select the respondents (sample vs. population). A thorough understanding of the problem through literature search and readings is a must. Good preparation and a solid understanding of Step 1 provide the foundation for initiating Step 2.

Step 2--Questionnaire Conceptualization

After developing a thorough understanding of the research, the next step is to generate statements/questions for the questionnaire. In this step, content (from the literature/theoretical framework) is transformed into statements/questions. In addition, a link between the objectives of the study and their translation into content is established. For example, the researcher must indicate what the questionnaire is measuring: knowledge, attitudes, perceptions, opinions, recall of facts, behavior change, etc. Major variables (independent, dependent, and moderator variables) are identified and defined in this step.

Step 3--Format and Data Analysis

In Step 3, the focus is on writing statements/questions, selecting appropriate scales of measurement, questionnaire layout, format, question ordering, font size, front and back cover, and proposed data analysis. Scales are devices used to quantify a subject's response on a particular variable. Understanding the relationship between the level of measurement and the appropriateness of data analysis is important. For example, if ANOVA (analysis of variance) is one mode of data analysis, the independent variable must be measured on a nominal scale with two or more levels (e.g., yes, no, not sure), and the dependent variable must be measured on an interval/ratio scale (e.g., strongly agree to strongly disagree).
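To make that link concrete, here is a minimal Python sketch (the article itself names no software for this step); the grouping variable, scores, and group sizes are hypothetical and purely illustrative:

```python
# Minimal sketch: one-way ANOVA relating a nominal independent variable
# (three levels: yes / no / not sure) to an interval-scaled dependent
# variable (summated Likert attitude scores). All data are hypothetical.
from scipy import stats

yes      = [4.2, 3.8, 4.5, 4.0, 3.9]
no       = [2.9, 3.1, 2.5, 3.0, 2.7]
not_sure = [3.4, 3.6, 3.2, 3.5, 3.3]

f_stat, p_value = stats.f_oneway(yes, no, not_sure)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

The point of the sketch is the pairing itself: a nominal grouping variable and an interval/ratio outcome are what make ANOVA an appropriate choice.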

Step 4--Establishing Validity

As a result of Steps 1-3, a draft questionnaire is ready for establishing validity. Validity is the amount of systematic or built-in error in measurement (Norland, 1990). Validity is established using a panel of experts and a field test. Which type of validity (content, construct, criterion, or face) to use depends on the objectives of the study. The following questions are addressed in Step 4:

  1. Is the questionnaire valid? In other words, is the questionnaire measuring what it is intended to measure?

  2. Does it represent the content?

  3. Is it appropriate for the sample/population?

  4. Is the questionnaire comprehensive enough to collect all the information needed to address the purpose and goals of the study?

  5. Does the instrument look like a questionnaire?

Addressing these questions, coupled with carrying out a readability test, enhances questionnaire validity. Formulas such as the Flesch Reading Ease, the Flesch-Kincaid readability formula, and the Gunning Fog Index are used to determine readability. Approval from the Institutional Review Board (IRB) must also be obtained. Following IRB approval, the next step is to conduct a field test using subjects not included in the sample. Make changes, as appropriate, based on both the field test and expert opinion. Now the questionnaire is ready for pilot testing.
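As an illustration of one of these formulas, the following Python sketch computes the Flesch Reading Ease score. The syllable counter is a crude vowel-group approximation, so published readability tools will give somewhat different values:

```python
import re

def count_syllables(word: str) -> int:
    """Crude approximation: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease:
    206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words).
    Higher scores indicate easier reading."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    n = max(1, len(words))
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

print(round(flesch_reading_ease(
    "This statement is short. It should be easy to read."), 1))
```

Running a draft questionnaire's statements through such a check helps confirm that the reading level matches the audience identified in Step 1.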

Step 5--Establishing Reliability

In this final step, the reliability of the questionnaire is established using a pilot test. Reliability refers to random error in measurement; it indicates the accuracy or precision of the measuring instrument (Norland, 1990). The pilot test seeks to answer the question: Does the questionnaire consistently measure whatever it measures?

The choice among reliability types (test-retest, split half, alternate form, internal consistency) depends on the nature of the data (nominal, ordinal, interval/ratio). For example, to assess the reliability of questions measured on an interval/ratio scale, internal consistency is appropriate. To assess the reliability of knowledge questions, test-retest or split-half is appropriate, as in the sketch below.
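As a sketch of the split-half approach for knowledge questions, the following Python fragment correlates odd- and even-numbered item halves and applies the Spearman-Brown correction; the pilot data are hypothetical:

```python
import numpy as np

# Hypothetical pilot data: rows = respondents, columns = knowledge items
# scored 0/1 (incorrect/correct)
scores = np.array([
    [1, 1, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0, 0],
    [1, 0, 1, 1, 1, 1],
])

# Split items into odd and even halves and total each half per respondent
half_a = scores[:, ::2].sum(axis=1)
half_b = scores[:, 1::2].sum(axis=1)

# Correlate the halves, then apply the Spearman-Brown correction
# to estimate the reliability of the full-length instrument
r = np.corrcoef(half_a, half_b)[0, 1]
split_half = 2 * r / (1 + r)
print(f"half-correlation = {r:.2f}, split-half reliability = {split_half:.2f}")
```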

Reliability is established through a pilot test by collecting data from 20-30 subjects not included in the sample. Data collected from the pilot test are analyzed using SPSS (Statistical Package for the Social Sciences) or other statistical software. SPSS provides two key pieces of information: the inter-item correlation matrix and the "alpha if item deleted" column. First, eliminate items/statements whose inter-item correlations are 0, 1, or negative. Then examine the "alpha if item deleted" column to determine whether alpha can be raised by deleting items, and delete items that substantially improve reliability. To preserve content, delete no more than 20% of the items. The reliability coefficient (alpha) can range from 0 to 1, with 0 representing an instrument full of error and 1 representing a total absence of error. A reliability coefficient (alpha) of .70 or higher is considered acceptable.
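The same checks can be run outside SPSS. The following Python sketch computes Cronbach's alpha and the "alpha if item deleted" values from hypothetical pilot data; the fifth item is deliberately inconsistent with the rest, so alpha rises sharply when it is deleted:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha: (k/(k-1)) * (1 - sum(item variances)/total variance)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical pilot data: rows = respondents, columns = Likert items (1-5)
pilot = np.array([
    [4, 5, 4, 4, 2],
    [3, 3, 3, 4, 5],
    [5, 5, 4, 5, 1],
    [2, 3, 2, 2, 4],
    [4, 4, 5, 4, 3],
])

print(f"alpha = {cronbach_alpha(pilot):.2f}")

# "Alpha if item deleted": recompute alpha with each item removed in turn;
# a large increase flags an item that weakens internal consistency.
for i in range(pilot.shape[1]):
    reduced = np.delete(pilot, i, axis=1)
    print(f"alpha if item {i + 1} deleted = {cronbach_alpha(reduced):.2f}")
```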

Conclusions

Systematic development of the questionnaire for data collection is important to reduce measurement errors arising from questionnaire content, questionnaire design and format, and the respondent. Well-crafted conceptualization of the content and its transformation into questions (Step 2) is essential to minimize measurement error. Careful attention to detail and an understanding of the process involved in developing a questionnaire are of immense value to Extension educators, graduate students, and faculty alike. Not following appropriate and systematic procedures in questionnaire development, testing, and evaluation may undermine the quality and utilization of data (Esposito, 2002). Anyone involved in educational and evaluation research must, at a minimum, follow these five steps to develop a valid and reliable questionnaire and enhance the quality of research.

References

Esposito, J. L. (2002, November). Interactive, multiple-method questionnaire evaluation research: A case study. Paper presented at the International Conference on Questionnaire Development, Evaluation, and Testing (QDET) Methods, Charleston, SC.

Groves, R. M. (1987). Research on survey data quality. Public Opinion Quarterly, 51, 156-172.

Norland-Tilburg, E. V. (1990). Controlling error in evaluation instruments. Journal of Extension, [On-line], 28(2). Available at http://www.joe.org/joe/1990summer/tt2.html

Radhakrishna, R. B., Leite, F. C., & Baggett, C. D. (2003). An analysis of research designs used in agricultural and extension education. Proceedings of the 30th National Agricultural Education Research Conference, 528-541.