April 1998 // Volume 36 // Number 2 // Research in Brief // 2RIB2


Total Quality Management and Effective Extension Teaching

Abstract
Extension educators use program evaluation as a tool for measuring the effectiveness of teaching. Program evaluation consists of four steps: (a) designing an evaluation instrument to gather data, (b) gathering and analyzing the data, (c) comparing data with standards, and (d) making recommendations for improvement. Program evaluation can provide credible information for decision makers. However, it says little about the variation in the data or about teaching quality. By modifying traditional Extension education evaluation methods to include continuous quality improvement techniques, we can make our evaluation process more appropriate and effective. Statistical quality control charts provide a process for measuring variation, determining its sources, and making modifications to improve the quality of teaching. Evaluation data collected about Extension programs always exhibit variation. To improve program quality, we must control and reduce the variance that exists among programs. Statistical control charts can be used to monitor this variation; once it is measured, appropriate action can be taken to reduce it and thus control and improve quality.


Introduction

Quality improvement, in particular Total Quality Management (TQM), has had a major influence on U.S. management philosophy (Ross, 1993). While not a modern idea, quality improvement has gained interest in recent years. For most of the 20th century, productivity was the primary emphasis for most profit and non-profit organizations; the emphasis, however, is now changing to quality. The reason is simple: neither price, technology, nor quantity is sufficient to differentiate products or services from the competition. One thing that attracts and keeps customers is the "extra value" of quality--as defined by the customer (Cannie, 1991).

The College of Food, Agricultural, and Environmental Sciences at The Ohio State University has begun a Continuous Quality Improvement (CQI) process. One of the goals of the CQI project is to support and ensure quality teaching and excellence throughout the college, including Ohio State University Extension (OSU Extension). Extension program personnel at OSU use program evaluation as a tool for measuring the effectiveness of their teaching. Typically, program evaluation consists of designing an instrument to gather data, gathering and analyzing the data, comparing the data with standards and benchmarks, and commending accomplishments and correcting shortcomings (Buford, Bedeian, & Lindner, 1995).

Benchmarking is the establishment of a point of reference from which comparisons can be made. One type of benchmarking is to set standards internally or administratively. Another is to examine records from similar organizations and set competitive standards. Once a benchmark has been established, an organization can determine whether or not a program is meeting its objectives.

While several sources of information for performance evaluation exist within OSU Extension, clientele input is regarded as a particularly important source. A series of valid and reliable evaluation instruments was designed by OSU Extension to gather evaluation data on effective teaching for Extension program personnel (Spiegel, 1992).

This set of materials is known as the Evaluation of Effective Extension Teaching (EEET). Group Form I measures teaching effectiveness. This evaluation instrument consists of nine statements with a five-point, Likert-type scale and a space for comments. The form uses the following scale: 1=Strongly Disagree, 2=Disagree, 3=Neither Disagree nor Agree, 4=Agree, and 5=Strongly Agree. The following statements are included in Group Form I: The Instructor ... (a) was well prepared, (b) was interested in helping me, (c) showed respect for all persons attending the workshop, (d) stimulated me in wanting to learn, (e) answered questions clearly, (f) related program content to real-life situations, (g) gave clear explanations, (h) held my attention, and (i) presented information that will help me. Validity of the instrument was established by a panel of experts consisting of faculty members at The Ohio State University, Extension professionals, and Extension clientele. A reliability coefficient of .93 was determined for Group Form I (Spiegel, 1992).

By using Group Form I, Extension program personnel evaluations serve two purposes: (a) to provide information to the individual Extension teacher for the improvement of teaching and (b) to assist administrators in making decisions regarding promotion/tenure and annual performance appraisal. However, sources of variation in the evaluation data have not been examined. By using CQI techniques on Group Form I data, the sources of variation reflecting the quality of Extension programming can be determined and controlled.

Continuous quality improvement (CQI) and total quality management (TQM) have similar definitions and are used interchangeably in this paper. TQM is a process used to achieve "quality." Defining the TQM process is difficult because of the various approaches developed by academicians and practitioners (Bedeian, 1993). Most processes developed to implement TQM include the following elements: (a) strategic planning, (b) leadership, (c) quality results, (d) information and analysis, (e) quality assurance, (f) human resource utilization, and (g) customer satisfaction (Joiner, 1992; Cohen & Brand, 1993; Schmidt & Finnigan, 1993; Bedeian, 1993).

While all elements in the TQM process are important and critical to successfully achieving quality, this paper will focus on the information and analysis element and, in particular, the use of statistical quality control charts to determine sources of variation affecting the quality of Extension programming.

Program evaluation in Extension is typically a "snapshot in time" of the effectiveness of a particular program or activity. Such information is useful not only to Extension professionals, but also to Extension administrators. But if TQM principles are to be applied, more information is needed. Statistical quality control is the application of statistical techniques for measuring and analyzing deviations in manufactured materials, parts, and products for the purpose of improving the quality of the process that created such deviations. The use of these statistical techniques was one of the key factors for Japan becoming the world leader in product quality (Buford, Bedeian, & Lindner, 1995). Statistical quality control techniques work in the service sector as well and they can work in Extension (Cohen & Brand, 1993; Bedeian, 1993).

An understanding of variation in evaluation scores is needed. Data collected on any activity, series of events, or situation will usually exhibit variation (Deming, 1960). In Extension, the primary goal is to educate clientele. One of many activities in this educational organization is providing useful programs to clientele. If ten workshops are conducted over a given year, undoubtedly some sort of program evaluation similar to the example above would be conducted.

If the average evaluation rating was a four or higher, Extension educators would have confidence that the educational program was conducted reasonably well. But since no educational program is conducted without variation, assuming that all clients provided a "good" evaluation is unrealistic. Tolerance or specification limits are set to allow variation above and below the average evaluation score. In fact, most of the evaluations will be slightly above or below the average score. Specification limits are upper and lower standards set by managers or clientele in response to clientele expectations. These specification limits represent quality standards that the organization and clientele consider acceptable.
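The idea of checking workshop ratings against specification limits can be sketched in a few lines of Python. The scores below are hypothetical, and the lower specification limit of 4.0 echoes the "four or higher" expectation above, with the five-point scale serving as a natural upper limit:

```python
# Hypothetical mean EEET ratings for ten workshops (five-point Likert scale).
workshop_means = [4.3, 4.6, 4.1, 4.8, 3.7, 4.5, 4.4, 4.9, 4.2, 4.6]

# Specification limits set by administrators in response to clientele
# expectations (both values are illustrative assumptions).
LSL, USL = 4.0, 5.0

# Workshops whose mean rating falls outside the acceptable band.
outside = [m for m in workshop_means if not (LSL <= m <= USL)]
print(f"{len(outside)} of {len(workshop_means)} workshops fall outside spec: {outside}")
```

Most means cluster slightly above or below the average, as the text describes; only the workshop rated 3.7 falls outside the acceptable band.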

OSU Extension uses Group Form I as one tool to assess the quality of its educational programs. For example, assume that Extension administrators expect educational programs to receive at least a mean evaluation rating of a four. Evaluation of educational programs should yield data within the specified acceptable limits. If not, educational resources will be wasted and some customers will be dissatisfied. Even if evaluation scores are within acceptable limits, can educational programs be improved? In other words, can the variation of evaluation scores be further reduced?

Causes of variation are either common (natural or random) or special (assignable or operational). Statistical process control is a feedback system that determines whether variation in evaluation scores of the educational program is due to common or special causes. The only way to reduce common cause variation is to physically change the process of how the program is carried out (for example, teaching materials may need to be updated). Special causes of variation occur as a result of operating a system. These causes are usually related to the skills and motivation of the people operating the system or the procedures that are followed.

Statistical data can be used to distinguish between variation due to common causes and special causes. Until special causes of variation are eliminated, they will continue to have an unpredictable effect on the output and will make some customers dissatisfied. Any variation from the target output (a four on a five-point Likert scale), even if it is within specification limits, results in increased costs (Bedeian, 1993). Thus, to improve the quality of products or services, variation in evaluation scores must be controlled and reduced.

Statistical control charts are used to monitor the variation in evaluation scores and determine whether variation is due to common or special causes. Control limits are calculated using the evaluation data. An upper control limit (UCL) is typically three standard deviations above the evaluation mean score and the lower control limit (LCL) is typically three standard deviations below the evaluation mean score. Ideally, control limits should fall within the specification limits, creating a stable and in control process; in other words, evaluation ratings fall within clientele expectations. Upper and lower control limits can only be changed by modifying the process.
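The control-limit calculation just described can be sketched as follows. The program means are hypothetical, and sigma is estimated here from a baseline of stable programs, an assumption made so the example stays self-contained:

```python
import statistics

# Hypothetical mean EEET ratings from a baseline of stable programs.
baseline = [4.5, 4.7, 4.4, 4.8, 4.6, 4.5, 4.7, 4.6, 4.4, 4.5]

center = statistics.mean(baseline)   # process mean
sigma = statistics.stdev(baseline)   # estimate of process variation

# Control limits: three standard deviations either side of the mean.
UCL = center + 3 * sigma
LCL = center - 3 * sigma

# New program means are checked against the limits; points outside
# the limits signal special cause variation.
new_means = [4.6, 3.8, 4.7]
special = [m for m in new_means if m > UCL or m < LCL]
print(f"center={center:.2f}, LCL={LCL:.2f}, UCL={UCL:.2f}, special: {special}")
```

A program rated 3.8 against this tight baseline falls below the LCL and would be investigated as a special cause, even though it might still satisfy a looser specification limit.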

Methodology

The target population for the study consisted of 1,770 Ohio State University Extension programs that had been evaluated using the EEET Group Form I. These evaluation forms corresponded to programs that had been submitted for analysis from May 1991 through March 1996. A stratified random sample of 135 programs was drawn by length of employment. The distribution of cases by length of employment was as follows: (a) Extension personnel with less than two years = 45, (b) Extension personnel with two to six years = 45, and (c) Extension personnel with more than six years = 45.

The data were analyzed using the Minitab statistical software package (Minitab, 1995). Statistical quality control charts, or Xbar charts, were produced for analysis. The Xbar charts are calculated using a pooled standard deviation to estimate the standard deviation of the population. Unbiased point estimates of the population mean were also calculated from the individual observations being reported for analysis. Data were plotted to visually inspect the variation that the specified process produced. A visual inspection of the Xbar chart identifies the type of variation that is occurring.
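The Xbar chart calculation described above can be sketched as follows. The per-respondent scores and program groupings are invented for illustration; with equal subgroup sizes, the pooled variance is simply the average of the within-program variances, and the subgroup means vary with sigma divided by the square root of the subgroup size:

```python
import math
import statistics

# Hypothetical per-respondent EEET scores, grouped by program (subgroups).
programs = [
    [5, 4, 5, 4, 5],
    [4, 4, 5, 5, 4],
    [5, 5, 4, 5, 5],
    [3, 2, 3, 2, 3],   # a noticeably weaker program
    [4, 5, 4, 5, 4],
]

n = len(programs[0])                       # common subgroup size
means = [statistics.mean(p) for p in programs]
grand_mean = statistics.mean(means)

# Pooled standard deviation: average the within-program variances.
pooled_var = statistics.mean(statistics.variance(p) for p in programs)
s_pooled = math.sqrt(pooled_var)

# Xbar chart limits: subgroup means vary with sigma / sqrt(n).
UCL = grand_mean + 3 * s_pooled / math.sqrt(n)
LCL = grand_mean - 3 * s_pooled / math.sqrt(n)

out_of_control = [m for m in means if m > UCL or m < LCL]
print(f"grand mean={grand_mean:.2f}, LCL={LCL:.2f}, UCL={UCL:.2f}")
print("subgroup means outside control limits:", out_of_control)
```

Because the limits are built from within-program variation rather than the spread of the program means themselves, the weak program's mean of 2.6 stands out clearly as a special cause.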

Findings

The process mean for aggregated EEET scores was 4.6, the upper control limit (UCL) was 5.3, and the lower control limit (LCL) was 3.8. An analysis of the data indicates whether the variation is due to special or common causes. Special cause variation was identified in four programs. Therefore, the educational programming process is suspected to be unstable.

Special cause variation is usually not associated with the process itself. To accurately measure the variation associated with a given process, special cause variation must be examined and eliminated. The first step in dealing with special cause variation is identifying that it has occurred. The next step is to determine why it occurred and to take action to contain the problem. To determine the reasons for special cause variation, the individuals who conducted the program are notified and asked to provide detailed information on the program.

Perhaps the Extension educator responsible for the program was ill and a film was shown to program participants instead. Showing a film certainly is not part of a "typical" educational program. Developing a permanent procedure for dealing with special cause variation is critical. For example, a policy could be adopted that states "all programs will be canceled when program presenters are unable to attend due to illness." This procedure would certainly eliminate any special cause variation due to participants being shown a film instead of receiving face-to-face teaching. Once this special cause variation has been examined and controlled, the Extension educator can focus on other cases that exhibit special cause variation. Once all special cause variation has been identified and resolved (ensuring our process is in control), the control limits are compared with the specification limits to determine if the process of carrying out the educational program is capable of producing evaluation data points within our clientele's specifications.
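This capability check amounts to comparing the control limits with the specification limits. Using the figures reported in the findings (process mean 4.6, UCL 5.3, LCL 3.8, lower specification limit 4.0), and assuming an upper specification limit of 5.0 (the ceiling of the five-point scale), the check is a one-line comparison:

```python
# Figures reported in the findings (the USL of 5.0 is an assumption,
# taken as the ceiling of the five-point Likert scale).
process_mean, UCL, LCL = 4.6, 5.3, 3.8
LSL, USL = 4.0, 5.0

# A process is capable when its control limits fall inside the
# specification limits; here LCL (3.8) is below LSL (4.0).
capable = LSL <= LCL and UCL <= USL
print("capable" if capable else "not capable")
```

Since the lower control limit sits below the lower specification limit, the process is not capable: some programs will predictably fall short of clientele expectations even while the process remains statistically in control.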

Four programs fell below the lower specification limit (LSL = 4.0). Therefore, the process of implementing the educational program is unable to produce data points within the stated specification limits. One way to bring the data into the specification limits is to lower the lower specification limit from 4.0 to 3.8. Changing the process by which the educational program is designed may also bring the data within the specification limits; this strategy requires developing new processes or changing the way educational programs are delivered. However, before the current programming process is changed or eliminated, another alternative for eliminating common cause variation should be considered. A good first step to take before changing the program is to stratify the data. In our example, data can be stratified by three classifications: Extension appointment, main program area, and length of employment.

By stratifying the data, patterns affecting the variation in evaluation scores may be observed. For example, the aggregated data have been stratified by length of employment with OSU Extension. Once the data have been stratified, special cause variation that was initially reported as common cause variation may be revealed. An analysis of the stratified EEET scores for Extension personnel with two to six years and with more than six years of employment indicates that special cause variation has occurred. On the other hand, none of the programs for Extension personnel with less than two years exhibits special cause variation.
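The stratified analysis can be sketched by recomputing control limits within each stratum and flagging points outside that stratum's own limits. The data below are hypothetical, and the sigma estimate uses an individuals-chart moving range (average moving range divided by the constant d2 = 1.128) rather than the pooled-standard-deviation Xbar approach used in the study:

```python
import statistics

# Hypothetical per-program mean EEET scores, stratified by length of
# employment (group names and values are illustrative, not the study's data).
strata = {
    "<2 yrs":  [4.1, 4.0, 4.2, 4.1, 4.0, 4.1, 4.2, 4.1, 4.0, 4.1],
    "2-6 yrs": [4.5, 4.6, 4.4, 4.5, 4.6, 4.5, 4.4, 4.6, 3.3, 4.5],
    ">6 yrs":  [4.7, 4.8, 4.6, 4.7, 4.8, 4.7, 4.6, 4.8, 3.4, 4.7],
}

def individuals_limits(scores):
    """Individuals-chart limits: sigma is estimated from the average
    moving range, divided by d2 = 1.128 for ranges of size two."""
    center = statistics.mean(scores)
    moving_ranges = [abs(a - b) for a, b in zip(scores, scores[1:])]
    sigma = statistics.mean(moving_ranges) / 1.128
    return center, center - 3 * sigma, center + 3 * sigma

for group, scores in strata.items():
    center, lcl, ucl = individuals_limits(scores)
    special = [s for s in scores if s < lcl or s > ucl]
    print(f"{group}: mean={center:.2f}, LCL={lcl:.2f}, UCL={ucl:.2f}, special={special}")
```

In this sketch the newest educators' stratum is stable, while the two longer-tenured strata each contain one program flagged as a special cause: the pattern the stratified analysis above describes.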

Thus, two processes earlier determined to be stable are discovered to be unstable. As length of employment increases, variation in evaluation scores increases. On the other hand, as length of employment increases, fewer programs fall below the lower specification limit. With the exception of one case, the process for Extension educators with more than two years of service is capable of producing data within the specification limits. Having determined why special cause variation occurred and taken action to contain the problem, Extension educators can once again turn their attention to the process of planning and implementing educational programs.

Dividing the process into component pieces is also possible. For example, stratified EEET scores for personnel with over six years of employment can be divided into the nine evaluation criteria: well prepared, interested in helping, showed respect, stimulated learning, answered questions clearly, related content to life, clear explanations, held my attention, and helpful information. Dividing the data in this way reveals special cause variation in a process thought to be stable and in control.

The data for each of the nine criteria--well prepared, interested in helping, showed respect, stimulated learning, answered questions clearly, related content to life, clear explanations, held my attention, and helpful information--exhibit special cause variation. The next step is to determine why this special cause variation occurred and to correct the problem.

In the above cases, if all the special cause variation can be eliminated and the process is determined to be in control and stable, would the programming process be capable of producing data within our specification limits? Analysis of the nine criteria reveals that if all special cause variation is eliminated, the programming process is capable of producing data within the specification limits.

Once the process is determined to be capable of producing data within our specification limits for Extension personnel with over six years of experience, Extension educators can continue to investigate other divisions of the data. For Extension personnel with less than two years of experience, the process is not capable of producing data within our specification limits for the nine criteria. However, for Extension personnel with two to six years of experience, the process is not capable of producing data within our specification limits for four of the criteria: stimulated learning, related content to life, held my attention, and helpful information.

Discussion

Length of employment affects quality in OSU Extension educational programming. Extension educators with more than six years in the organization tend to exhibit more variation in the evaluation data when compared to newer employees. On the other hand, Extension educators with more than six years in the organization tend to receive scores within our specification limits when compared to newer employees. OSU Extension could provide more training or in-service opportunities to newer Extension educators so they can be better prepared to design and implement educational programs geared toward customer expectations. In addition, separate specification limits need to be considered according to length of employment. In other words, evaluations from Extension educators with less than two years with the organization should be compared against lower specification limits.

Statistical quality control charts identify special cause variation through specific tests, but they do not provide guidelines on how to control its causes. Extension educators need qualitative data-gathering techniques to complement findings from statistical quality control charts. For instance, face-to-face interviews with program participants, observers, and presenters will provide valuable information for determining the reasons for special cause variation. Similarly, when common cause variation has been determined, the alternative of disaggregating the data also requires analytical consideration.

Tests for determining special cause variation were established using profiles from for-profit organizations, particularly with data from the industrial sector. Factors affecting the quality of products manufactured in the industrial sector are more easily identified and controlled than those in other sectors. Identifying and controlling factors affecting the quality of educational programs is more challenging. Developing and validating a more appropriate set of tests for determining sources of special cause variation in non-profit organizations, like OSU Extension, is strongly recommended.

Quality is the foundation of TQM. If a non-profit organization, such as OSU Extension, values customer satisfaction, then quality educational programming must be designed and delivered. Statistical control charts are one method of identifying elements that affect the quality of services. Striving for quality programming will be a continuous process for organizations like OSU Extension, where customer satisfaction is paramount.

References

Bedeian, A. G. (1993). Management (3rd ed.). Fort Worth: Dryden.

Buford, J. A., Jr., Bedeian, A. G., & Lindner, J. R. (1995). Management in Extension (3rd ed.). Columbus, OH: Ohio State University Extension.

Cannie, J. K. (1991). Keeping customers for life. New York: American Management Association.

Cohen, S., & Brand, R. (1993). Total quality management in government. San Francisco: Jossey-Bass.

Deming, W. E. (1960). Sample design and business research. New York: Wiley.

Joiner, B. L. (1992). Fundamentals of fourth generation management. Madison, Wisconsin: Joiner Associates, Inc.

Minitab. (1995). Minitab reference manual (Release 10Xtra). State College, PA: Minitab.

Ross, J. E. (1993). Total quality management: Text, cases and readings. Delray Beach, FL: St. Lucie Press.

Schmidt, W. H., & Finnigan, J. P. (1993). TQManager: A practical guide for managing in a total quality organization. San Francisco: Jossey-Bass.

Spiegel, M. (1992). Synthesizing evaluation: Perspectives, practices, and evidence. Proceedings of the American Evaluation Association, 92.