June 1995 // Volume 33 // Number 3 // Research in Brief // 3RIB3


Computer-administered Surveys in Extension

Abstract
The purpose of this study was to determine the feasibility of using a computer-administered survey to gather data from Montana Extension Service (MES) personnel. The electronic survey did not adversely affect the return rate or responses. A majority of surveys in this study were returned via e-mail. However, one-third of the respondents were uncomfortable with the electronic survey despite prior e-mail training. Computer-administered surveys are a cost-effective, speedy, and highly accurate method of collecting data that should be utilized by the Cooperative Extension Service.


Jodee L. Kawasaki
Assistant Professor
Montana State University Libraries
Bozeman, Montana
Internet address: alijk@gemini.oscs.montana.edu

Matt R. Raven
Assistant Professor
Agricultural Education and Experimental Statistics
Mississippi State University
Internet address: raven@ra.msstate.edu


Introduction and Theoretical Framework

The Extension Service (ES) has always been in the business of getting people to apply new knowledge and make use of information. However, it was reported in the Future Application of Communication Technology report (Extension Service-United States Department of Agriculture & Extension Committee on Organization and Policy [ES-USDA & ECOP], 1992) that there is a need to increase staff knowledge and skills in communication and information technologies. Computers are here to stay; they will become an important tool to help the ES assimilate as well as disseminate knowledge. Shill (1992) noted that:

     The agriculture information dissemination infrastructure is
     in a state of significant transition.  Traditional
     institutions, such as the state agricultural Extension
     services, have been forced to adapt to the emergence of
     electronic dissemination channels while still making active
     use of print and face-to-face communication mechanisms (p.
     313).

Computer-administered surveys are an example of an information technology that agents, specialists, and other Extension educators could use to gather data currently collected using mailed questionnaires. Electronic surveys can be used to reveal the behavior of people who use computers as a communication mode. Most of the literature focuses on surveys programmed on one computer terminal that one person uses at a time. However, some researchers have used a computer-administered survey which is sent simultaneously by e-mail to multiple computer users (Sproull, 1986).

Sproull (1986) found that an e-mail survey produced higher response rates at a lower cost than either paper questionnaires or one-on-one interviews. The average response time for electronic surveys was half that of conventional surveys. Sproull recommended sending letters before the survey, with a signature endorsement on letterhead, to lend status and legitimacy. Follow-ups were sent via e-mail instead of the traditional mail postcards. One drawback Sproull encountered was that in an e-mail survey, respondents must be motivated to respond. Kiesler and Sproull (1986) found that electronic surveys lacked social context information. Another drawback discovered by Sproull (1986), Kiesler and Sproull (1986), Rosenfeld, Booth-Kewley, and Edwards (1993), as well as Rosenfeld, Doherty, Vicino, Kantor, and Greaves (1989) was that computer-administered surveys are limited by organizational locations, computer equipment, and different networks. The respondents must be familiar with and have access to an electronic mail system.

Rosenfeld et al. (1993) felt that if an organization was linked to an existing e-mail system such as BITNET or the Internet, then it would be possible to conduct a low-cost electronic survey. They recommended computer surveys as a preferred mode for sample sizes of 500 or less. Additionally, the response rates of surveys administered by computer and by paper were nearly identical, and psychological and organizational scales showed consistent internal reliability across the two modes.

Rosenfeld et al. (1989) studied the effects of three different microcomputer systems on an electronic survey. They found that a computerized survey "administered on virtually any type of computer in general use today can produce...responses at least as reliable and valid as would be obtained if paper and pencil were used" (p. 153).

The literature clearly provides support for the use of computer-administered surveys, and electronic surveys are likely to become an increasingly common research methodology. However, the use of computer-administered surveys with Extension Service studies and audiences has not been investigated. The question arises: Can computer-administered surveys be used to collect data from agents, specialists, and other ES stakeholders regarding Extension-related issues?

Purpose and Objectives

The purpose of this study was to determine the feasibility of using a computer-administered survey to gather data from ES personnel. The following research questions were identified:

1. Were there differences by demographic factors of Montana Extension Service (MES) professionals in their method of returning the survey (e-mail or regular mail)?

2. Were there differences between response dates by method of returning the survey (e-mail or regular mail)?

3. Were there differences between MES agents and specialists in their comfort level in answering an electronic survey?

Methods and Procedures

Population and Sample

The population for this descriptive census study consisted of specialists and county agents employed by the Montana Extension Service during the 1993-94 academic year (N = 116). The population frame was determined by using the County Extension Agents Directory, printed by the MES Office of the Director in September 1993.

Instrument Design

The basic design for the survey instrument followed Dillman's (1978) Total Design Method (TDM). Because Dillman's TDM deals with mailed surveys and not computer-administered surveys, trial surveys were sent by e-mail to members of the researcher's graduate committee to determine the procedures that would be needed. A second trial test was conducted with reference librarians at the Montana State University Library; all of the librarians use e-mail, but their skill levels varied. Both trial tests helped to determine the format and layout of the instrument as well as its face and content validity.

The instrument asked respondents to rate the importance of each e-mail-related competency on the left-hand side of the computer screen and their own knowledge of that competency on the right-hand side, each on a five-point Likert-type scale. This structure was based on Borich's (1980) needs assessment model: subtracting the knowledge rating from the importance rating yields a discrepancy score for each competency from each respondent. The discrepancy scores were then weighted by multiplying each competency's discrepancy score by its average importance rating. Demographic questions included gender, age, position, education level, and professional experience. The instrument was pilot tested with 12 participants (four specialists and eight agents). Reliability coefficients were calculated for the appropriate sections of the instrument; Cronbach's alpha ranged from .91 to .98.
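The Borich weighted-discrepancy calculation described above can be sketched as follows. The ratings are hypothetical and serve only to illustrate the arithmetic; they are not the study's data.

```python
# Sketch of Borich's (1980) needs assessment calculation.
# The 5-point ratings below are hypothetical, not the study's data.

def weighted_discrepancy(importance, knowledge):
    """Mean weighted discrepancy (WD) score for one competency.

    Each list holds one 1-5 Likert rating per respondent.  Each
    respondent's discrepancy (importance - knowledge) is weighted by
    the competency's mean importance, and the weighted scores are
    averaged across respondents.
    """
    n = len(importance)
    mean_importance = sum(importance) / n
    discrepancies = [i - k for i, k in zip(importance, knowledge)]
    return sum(mean_importance * d for d in discrepancies) / n

# Example: four respondents rating one competency.
importance = [5, 4, 5, 4]   # mean importance = 4.5
knowledge = [3, 2, 4, 3]    # discrepancies: 2, 2, 1, 1 (mean 1.5)
wd = weighted_discrepancy(importance, knowledge)   # 4.5 * 1.5 = 6.75
```

A large WD score flags a competency that respondents rate as important but feel they know little about, which is what makes it useful for prioritizing training needs.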

Data Collection and Analysis

Following recommendations made by Sproull (1986), Rosenfeld et al. (1993), Booth-Kewley, Edwards, and Rosenfeld (1992), and Rosenfeld et al. (1989), the researcher sent a hard copy of the cover letter with relevant signatures and a set of directions to the survey population. The e-mail survey followed three days later. Two follow-up e-mail messages to non-respondents helped secure the needed return rate. The double dip technique was employed to verify that non-respondents did not differ from respondents: 10% of the non-respondents (two people) were pooled, one drawn from the agents stratum and one from the specialists stratum. There was no statistical difference between respondents and double dip non-respondents.
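The double dip draw amounts to a small stratified random sample of non-respondents. A minimal sketch of that step, using hypothetical names and list sizes (the study's actual non-respondent pool is not reported in detail):

```python
# Sketch of the "double dip" non-respondent check: pool roughly 10% of
# the non-respondents by drawing one person from each stratum.
# The names and list sizes here are hypothetical.
import random

random.seed(42)  # fixed seed so the illustrative draw is repeatable

non_respondents = {
    "agents": [f"agent_{i}" for i in range(1, 14)],           # 13 hypothetical agents
    "specialists": [f"specialist_{i}" for i in range(1, 8)],  # 7 hypothetical specialists
}

# One randomly chosen person per stratum (2 people, ~10% of 20).
double_dip = {stratum: random.choice(people)
              for stratum, people in non_respondents.items()}
```

The people drawn this way are then surveyed by follow-up, and their answers compared with those of the original respondents to check for non-response bias.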

Responses from the survey were entered into a database using DBASE III. The personal computer version of the Statistical Package for the Social Sciences (SPSS/PC+) was used for analyses (Norusis & SPSS, 1988). Frequencies, means, and standard deviations were run for the weighted discrepancy (WD) scores of each e-mail competency for each stratum. Because the study was a census, only descriptive statistics were used.
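The descriptive analysis of WD scores by stratum can be sketched in a few lines. The scores below are hypothetical placeholders, not the study's results; population standard deviation is used because the study was a census rather than a sample.

```python
# Sketch of the descriptive analysis: counts, means, and standard
# deviations of WD scores by stratum.  The scores are hypothetical.
from statistics import mean, pstdev  # population SD: the study is a census

wd_scores = {
    "agents": [6.8, 5.2, 7.1, 4.9],
    "specialists": [5.5, 6.0, 4.8],
}

summary = {stratum: (len(s), round(mean(s), 2), round(pstdev(s), 2))
           for stratum, s in wd_scores.items()}

for stratum, (n, m, sd) in summary.items():
    print(f"{stratum}: n={n}, mean={m}, sd={sd}")
```

Because every member of the population was surveyed, these descriptive figures characterize the population directly, which is why no inferential tests were run.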

Results and Findings

Method of Return by Demographics

The data showed that the agent stratum had 40 (60.6%) respondents return the survey via e-mail and 26 (39.4%) through regular postal mail. In contrast, the specialists returned 13 (43.3%) of the surveys via e-mail and 17 (56.7%) by regular mail. Age, gender, education level, and professional experience did not influence the method MES agents or specialists used to return the survey.

Method of Return by Response Date

Respondents were classified as early, middle, or late respondents. Of the early returns (prior to any follow-up messages), 28 (68.3%) surveys came back through e-mail and 13 (31.7%) arrived by postal mail. Of the middle returns (surveys received between the two follow-ups), 18 (51.5%) came back through e-mail and 17 (48.5%) by regular mail. Late returns (after the follow-ups) included seven (35%) surveys by e-mail and 13 (65%) by postal mail. For the purpose of this study, the late returns included the double dip respondents.

Comfort Level

One-third of the total respondents (32) were not comfortable responding to an electronic survey: 22 agents and 10 specialists. Approximately one-third of the agents (21) and specialists (9) were less to somewhat comfortable with the electronic survey, and a third of the agents (23) and specialists (11) were more to very comfortable responding to the survey through e-mail. The mean of respondents' comfort level was 2.81 on a scale of 1 to 5 (2.81 for the agents and 2.80 for the specialists), indicating that overall respondents were less than comfortable using e-mail.

Conclusions and Recommendations

Even though MES professionals had prior e-mail training, the data suggest they need more training. This conclusion is based on the proportion of surveys (45%) returned through postal mail and on the one-third of respondents who were not comfortable answering the survey via e-mail. Surprisingly, campus specialists, who have easier and less costly access to e-mail, returned the surveys by e-mail less readily than agents. Nevertheless, fewer surveys were returned by postal mail than by e-mail, and there was no difference between respondents based on the method of returning the survey. These conclusions support the findings of the Future Application of Communication Technology report (ES-USDA & ECOP, 1992). Further training in e-mail competencies is essential if MES professionals are to be successful with electronic correspondence.

Early respondents used e-mail more readily than late respondents; by percentage, the early and late respondents reversed the method of returning the survey. One might assume that the late respondents represent the "Laggards" of adopting e-mail correspondence. Extension administrators could mandate that certain types of correspondence among Extension professionals be done only via e-mail. This would increase the amount of time "Laggards" spend using e-mail, making them more familiar with the e-mail system.

More training would increase the comfort level in answering an electronic survey. Training needs to focus on different topics such as system protocols, e-mail etiquette, potential uses of e-mail other than the basic memo or letter correspondence, and security concerns.

Further research is needed in the development of training and update sessions for Extension professionals to use e-mail. For example, even though the personnel in this study all had previous e-mail training, one-third were still uncomfortable with this technology. Additionally, studies need to be conducted with Extension clientele to determine if e-mail could be used by the Extension Service to collect data to help plan programs.

The findings support the conclusions of Sproull (1986), Rosenfeld et al. (1989), Booth-Kewley et al. (1992), and Rosenfeld et al. (1993) that electronic surveys do not adversely affect the return rate or responses. A majority of surveys in this study were returned via e-mail. Additionally, e-mail responses were similar to postal responses (Kawasaki, 1994). Computer-administered surveys are a cost-effective, speedy, and highly accurate method of collecting data that should be utilized by the Cooperative Extension Service.

References

Booth-Kewley, S., Edwards, J. E., & Rosenfeld, P. (1992). Impression management, social desirability, and computer administration of attitude questionnaires: Does the computer make a difference? Journal of Applied Psychology, 77(4), 562-566.

Borich, G. D. (1980). A needs assessment model for conducting follow-up studies. Journal of Teacher Education, 31(3), 39-42.

County Extension Agents directory. (1993). Bozeman, MT: Extension Service, Office of the Director, Montana State University.

Dillman, D. A. (1978). Mail and telephone surveys: The total design method. New York: Wiley.

Extension Service-U.S. Department of Agriculture, & Extension Committee on Organization and Policy. (1992). F-A-C-T future application of communication technology: Strategic information plan for the Cooperative Extension system. Washington, DC: Extension Service-U.S. Department of Agriculture, Communication, Information & Technology.

Kawasaki, J. L. (1994). Information-related competencies for Montana Extension Service professionals. Unpublished master's thesis, Montana State University, Bozeman.

Kiesler, S., & Sproull, L. S. (1986). Response effects in the electronic survey. Public Opinion Quarterly, 50, 402-413.

Norusis, M. J., & SPSS, Inc. (1988). SPSS/PC+ V2.0 Base Manual for the IBM PC/XT/AT and PS/2 [computer program manual]. Chicago: SPSS, Inc.

Rosenfeld, P., Booth-Kewley, S., & Edwards, J. E. (1993). Computer-administered surveys in organizational settings: Alternatives, advantages, and applications. American Behavioral Scientist, 36(4), 485-511.

Rosenfeld, P., Doherty, L. M., Vicino, S. M., Kantor, J., & Greaves, J. (1989). Attitude assessment in organizations: Testing three microcomputer-based survey systems. Journal of General Psychology, 116(2), 145-154.

Shill, H. B. (1992). Information 'publics' and equitable access to electronic government information: The case of agriculture. Government Information Quarterly, 9(3), 305-322.

Sproull, L. S. (1986). Using electronic mail for data collection in organizational research. Academy of Management Journal, 29(1), 156-169.