January 1983 // Volume 21 // Number 1 // Feature Articles


In-Service Training: Does It Make A Difference?

Abstract
Criteria are identified that allow us to assess the value of in-service programs for Extension and/or participants, both staff and volunteers.


M. F. Smith
Program Evaluation Specialist
Cooperative Extension Service
University of Florida - Gainesville

John T. Woeste
Dean for Extension
University of Florida - Gainesville


In Extension, we spend a lot of resources on training agents and the volunteers who help them. And, just as evaluation is important for knowing the effectiveness of programs, evaluation is also important for knowing the effectiveness of training efforts.

The most common evaluation of an in-service educational program is participant reaction (Did they enjoy the program?). However, that alone provides no indication of the real value of the training to the participant or to the organization and provides no data for improving the program for future repetitions. Recent writers on the subject of evaluation of in-service training have suggested that more effort be focused on determining organizational results directly attributable to the training.1 And, we would agree, if the purpose of the evaluation is organizational accountability. However, different evaluation purposes require different criteria for evaluation and different times for collecting the data.

This article presents criteria to evaluate in-service educational programs before, during, and after implementation and keys these criteria to the different purposes for evaluation.

Criteria

The criteria presented are generalized to be relevant to a broad range of in-service educational situations. Some programs may have other specific benefits or causal relationships that would also be considered in their evaluations. People planning evaluations, if they're different from program implementers, should be certain to check out any specific evaluation needs with program implementers (for example, program specialists) and with people wanting the sessions to be held (for example, agents and county and state program leaders).

Stage I: Pre-Implementation

Too little time is usually spent on evaluation of in-service educational programs before their implementation. Perfectly good programs may have little or no positive impact because they weren't on target: they weren't what was needed to solve a problem. Similarly, on-track but poorly designed programs may have no positive impact and may cause participants (agents, specialists, or volunteers) to develop a negative attitude toward training programs in general and toward the Extension organization specifically. Relevant criteria at this stage are:

  1. Worthiness of the objectives: Do objectives support Extension mandates and goals? Even if the objectives were accomplished, what real difference would it make to the state or county organization and/or to participants? Do objectives address an identified problem/need?
  2. Appropriateness of program to situation: Is the performance problem addressed one that's best solved by an educational program? Would some other course of action, for example, changing a procedure, purchasing equipment, or adding support staff, be more likely to produce the desired results more quickly or at less expense?
  3. Appropriateness of course content/activities to objectives: Are experiences appropriate for the expected later behavior of participants (for example, lectures where the expectation is awareness; problem solving and case analysis where the expectation is application2)? Are all objectives covered by the content? Are sufficient materials and practice planned for each objective to be met?
  4. Appropriateness of preprogram publicity: Does the title clearly identify the content? Is there a clear statement of the objectives and the depth and breadth of content to be covered? Is the target audience described? Are prerequisites listed?
  5. Qualifications of staff: Do staff credentials provide evidence of competency in the content area of the in-service program and/or in the process of teaching?
  6. Efficiency of resources planned: Are resources available to cover costs of the program? Are all planned resources necessary to the success of the program? Could changes be made to use resources more efficiently without loss of program quality?
  7. Comprehensiveness of evaluation plans: Are objectives written in measurable terms? Are steps delineated to gather data at appropriate times to measure objectives?

Stage II: During Implementation

Gathering information about an in-service educational program while it's ongoing is necessary if causes for success (or lack of same) are to be known and if improvements in the program are to be made. Some of the important criteria here are:

  1. Appropriateness of participants: Are participants members of the intended audience, those most in need (for example, if you hold a training session for agents on questionnaire design, is it only those agents who are already doing a pretty good job on their surveys who attend)? Do participants have the prerequisites? Do participants know what to expect from the training and what's expected of them?
  2. Fit of actual and planned activities: Do instructional and evaluation activities occur as planned? Are planned materials used? Do planned resource people participate?
  3. Appropriateness of facilities: Were facilities opened and ready on arrival of participants? Was the physical situation conducive to learning ... proper ventilation, acoustics, temperature?
  4. Effectiveness of instructors: Do they (a) hold the interest of agents/specialists, (b) adequately cover the subject, and (c) help the participants apply the material to real-job situations?

Stage III: Post-Implementation

Gathering information about a program after it has been completed is the most often used approach in evaluation and is necessary to judge ultimate value or results.


However, with in-service educational programs, a word of caution must be issued. Once an individual gets back to the regular work situation, any attempt to measure the impact of the training in terms of application of concepts will need to take into consideration the relative impact of competing and complementary forces that potentially influence the practice under consideration, and the individual nature of outcomes from training.3 Some of the criteria useful at this stage are:

  1. Level of participant enjoyment: How well did the participants like the program? How interested and enthusiastic were they about the training?
  2. Increase in learning: What principles and facts were understood and absorbed? What attitudes were affected? In what skills did participants become proficient?
  3. Behavioral changes: What principles have been put into practice on the job?
  4. Organizational benefits: Is the problem that precipitated the program still present after the training has been done? (Here we are usually looking for payoffs such as increased county support for programs, improved staff morale, reduced costs, more and better-targeted programs, etc.)

Criteria Keyed to Evaluation Purpose

No evaluation would focus on all the identified criteria at any one time. It's a waste of resources to gather more data than required to answer specific questions for specific courses of action. Some of the more common purposes of evaluation are improvement of training, organizational accountability, impact assessment, and cost/benefit analysis.4 Table 1 shows the specific criteria most appropriate for these four evaluation purposes.

Table 1.
Criteria Keyed to Evaluation Purpose

Evaluation Purpose                Stage   No.   Criteria
Improvement of training program     I      3    Content/Activities
                                    I      4    Preprogram publicity
                                    I      5    Staff competency
                                    I      7    Evaluation
                                    II     1    Participants
                                    II     2    Implementation as planned
                                    II     3    Facilities
                                    II     4    Instructors
                                    III    1    Participant satisfaction
                                    III    2    Participant learning
Organizational accountability       I      1    Objectives
                                    I      2    Training as problem solution
                                    I      6    Resources
                                    II     1    Participants
                                    III    3    Participant practice change
                                    III    4    Organizational benefits
Impact assessment                   III    3    Participant practice change
                                    III    4    Organizational benefits
Cost/benefit analysis               I      6    Resources
                                    III    3    Participant practice change
                                    III    4    Organizational benefits

Summary

Criteria have been provided for evaluating in-service educational programs before, during, and after implementation. The usefulness of these criteria was keyed to specific evaluation purposes. These have been offered to make the evaluation process more systematic, easier to plan, and more effective in promoting planned change through Extension.

Footnotes

  1. M. G. Brown, "Evaluating Training Via Multiple Baseline Designs," Training and Development Journal, XXXIV (No. 10, 1980), 11-16, and M. E. Smith, "Evaluating Training Operations and Programs," Training and Development Journal, XXXIV (No. 10, 1980), 70-78.
  2. P. A. McLagan, Helping Others Learn (Reading, Massachusetts: Addison-Wesley, 1978).
  3. J. E. Dopyera and M. Lay-Dopyera, "Effective Evaluation: Is There Something More?" Training and Development Journal, XXXIV (No. 11, 1980), 67-68, 70.
  4. S. B. Anderson and S. Ball, The Profession and Practice of Program Evaluation (San Francisco: Jossey-Bass, 1978).