Winter 1985 // Volume 23 // Number 4
Does In-Service Make a Difference?
Abstract
Do agents learn and then use what they learn?
Why do agents in Ohio spend up to 10 days a year out of the county attending workshops, symposiums, and other in-service activities? Why do supervisors encourage agents to attend in-service activities? Is attendance at in-service activities worth the time away from the clientele? We have always assumed that in-service is necessary to remain competent and confident, but do data exist to support this assumption?
Why Do In-Service?
Peters and Waterman, in the book In Search of Excellence, inform us that almost all successful companies engage in intensive in-service training.1 IBM devotes 15 months to basic sales training, with advanced training to follow. After basic training, IBM's in-service adds up to 15 days of training activities a year for everyone, regardless of seniority.
Peters and Waterman go on to say that they don't have data proving conclusively that their excellent companies are far above the norm in the amount of time they spend on training activities. On the other hand, they state that enough signs of training intensity exist to suggest that this might indeed be the case.
This emphasis placed on in-service by successful businesses suggests they believe training makes a difference. It implies that Extension, which also must remain credible, has a real need for consistent intensive training. Is our training doing the job? Does it make a difference? Our belief was that in-service was making a difference, that there was a significant change in our agents after in-service activities.
To test this belief, we examined the cognitive change (knowledge acquisition) of agents attending an evaluation workshop compared to those who didn't attend. Our hypothesis was that agents who attended the workshop would show significantly greater knowledge of evaluation principles and theories than those not attending.
To clarify what is meant by in-service for Ohio Extension, the definition from the National Policy Guidelines for Staff Development was followed, with some modification:
- Education received in a structured setting that enables one to become more competent professionally, i.e., to further develop technical subject-matter competencies to keep abreast of and, if possible, ahead of change; to explore educational and technological content and processes in varying depths and to extend personal competencies . . . . 2
The Ohio Study
With our hypothesis and definition of in-service in mind, we studied the participants in an evaluation workshop held in December, 1982. Aside from its role in the study, the purpose of the workshop was to improve agents' skills in evaluating their Extension programs. To help them in that effort, topics included the definition of evaluation (including current theories in evaluation), practical examples of evaluation, conducting an evaluation, preparing instruments, data collection procedures, qualitative evaluation, and writing for professional journals.
A modified static-group comparison design was used. One group of staff was the experimental group, namely those who attended and who received the information given at the evaluation workshop. A second group was identified by a stratified random selection process from those agents who didn't attend the evaluation workshop. Stratified random selection was used to identify agents similar in assignments, program area, and tenure to those agents attending. Posttests were given to both groups.3
Admittedly, this design has weaknesses, the biggest being sampling error. Does the sample represent the characteristics of the population under study? Is there a difference between those who self-selected to attend and those who didn't? Random assignment and a pretest for both groups would have strengthened the inferences one can make, but we were unable to do this, given the freedom agents had in deciding whether to attend the workshop.
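For readers who want a concrete picture of the selection step, the sketch below shows one way a stratified draw of non-attendees could be carried out. It is an illustration only: the roster, field names, and strata are hypothetical, not the actual Ohio staff records or procedure.

```python
import random

# Hypothetical roster: the names, field labels ("program_area", "tenure_band"),
# and strata are illustrative assumptions, not the actual Ohio staff records.
roster = [
    {"name": "Agent A", "program_area": "4-H",         "tenure_band": "0-5 yrs",  "attended": True},
    {"name": "Agent B", "program_area": "4-H",         "tenure_band": "0-5 yrs",  "attended": False},
    {"name": "Agent C", "program_area": "Agriculture", "tenure_band": "6-15 yrs", "attended": True},
    {"name": "Agent D", "program_area": "Agriculture", "tenure_band": "6-15 yrs", "attended": False},
    # ... the rest of the state roster would go here
]

def stratum(agent):
    """Return the matching characteristics that define an agent's stratum."""
    return (agent["program_area"], agent["tenure_band"])

attendees = [a for a in roster if a["attended"]]
non_attendees = [a for a in roster if not a["attended"]]

# Count how many attendees fall into each stratum.
needed = {}
for a in attendees:
    needed[stratum(a)] = needed.get(stratum(a), 0) + 1

# Randomly draw a matching number of non-attendees from each stratum.
control_group = []
for key, count in needed.items():
    pool = [a for a in non_attendees if stratum(a) == key]
    control_group.extend(random.sample(pool, min(count, len(pool))))

print([a["name"] for a in control_group])
```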
The resource people from the evaluation workshop wrote the questions for the posttests. Their questions were designed to determine whether a change in knowledge of evaluation principles and theories had taken place. The final instrument was checked for validity by a group of colleagues in the Agriculture Education Department and the Cooperative Extension Service. Graduate students also took the test to help clarify the instrument.
Forty-one agents attended the evaluation workshop in December of 1982. Thirty-one (76%) completed the test. Thirty-five agents were selected as the control group from those who didn't attend the workshop. Seventy-seven percent of these agents, or 27 out of 35, returned usable instruments for analysis.
The Findings
Of the 31 responding Extension staff from the workshop group, the mean posttest score was 21.5, with a median of 22.5 out of a possible 32, or close to 70%. For the control group, the mean posttest score was 16.1, with a median of 17 out of a possible 32, or 53%. Using an independent t-test to compare the groups, we found that agents who attended the evaluation workshop scored significantly higher on knowledge than did the control group.
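A minimal sketch of that comparison is shown below, assuming the raw posttest scores were available in two lists. The scores here are placeholders (the study reports only the summary statistics above), and scipy is used for the independent t-test.

```python
from scipy import stats

# Placeholder score lists: the actual raw posttest scores were not published,
# so these values are illustrative only. Scores are out of a possible 32.
workshop_scores = [22, 25, 19, 24, 21, 23, 18, 26, 20, 22]  # attendees (n = 31 in the study)
control_scores = [15, 17, 14, 18, 16, 13, 19, 16, 17, 15]   # non-attendees (n = 27 in the study)

# Independent-samples t-test comparing the two group means.
t_statistic, p_value = stats.ttest_ind(workshop_scores, control_scores)

print(f"Workshop mean: {sum(workshop_scores) / len(workshop_scores):.1f}")
print(f"Control mean:  {sum(control_scores) / len(control_scores):.1f}")
print(f"t = {t_statistic:.2f}, p = {p_value:.4f}")
```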
As a follow-up to this activity, the correct answers with a cover letter of thanks and explanation were sent to both sets of agents (control and experimental groups). This provided a continuation of training and a pat on the back to those who had attended the workshop and some encouragement to those who didn't.
Conclusion
Our main reason for the study was to see if in-service training made a difference, especially in knowledge acquisition. Our conclusion was that, in at least the cognitive (knowledge) area, a significant change had taken place. Agents who attended the evaluation workshop had significantly greater knowledge in the area of evaluation than did agents who didn't attend. We inferred this despite our lack of random assignment to the experimental group, on the assumption that the two groups of agents were equal in evaluation knowledge in December, 1982.
A contributing factor to this increase in knowledge, though, could have been the teaching ability of the resource people and how interesting they made the workshop. Another possible influence is that the agents who attended may have wanted the knowledge, although evaluation is generally not viewed as a high-priority topic among agents.
Implications
The next step in this "in-service making a difference process" was to see how this knowledge changed agents' behavior in the county. Did agents use this evaluation knowledge? Did they try to incorporate these principles of evaluation in conducting their own studies?
Joyce has suggested, in studies conducted with teachers, that as few as five percent of the participants in a structured teacher in-service activity incorporate or transfer knowledge gained from an in-service workshop or activity into their repertoire.4 Even with proper feedback, only 50% will try it once. We believed Extension agents' batting average was higher than this. Our next step was to find out. In a telephone survey more than a year later, we discovered that over 90% of the agents who attended the evaluation workshop in December, 1982, were in the process of evaluating, or had tried to evaluate, one of their major programs. Over 95% had used a questionnaire incorporating principles from the workshop.
Does in-service make a difference? We think it did.
Footnotes
1. Thomas J. Peters and Robert H. Waterman, Jr., In Search of Excellence: Lessons from America's Best-Run Companies (New York: Warner Books, Inc., 1982).
2. National Policy Guidelines for Staff Development (Durham: University of New Hampshire, Cooperative Extension Service, 1977).
3. Walter R. Borg and Meredith Damien Gall, Educational Research: An Introduction, 3rd ed. (New York: Longman, Inc., 1979).
4. Bruce Joyce, "Effective Staff Development" (paper presented at the Effective Instruction and Effective Staff Development Conference, Washington, D.C., May 16, 1984).