February 1997 // Volume 35 // Number 1 // Research in Brief // 1RIB1
Instrument Development for Low Literacy Audiences: Assessing Extension Program Personnel Teaching Effectiveness
Abstract
Performance evaluation plays an important role in providing feedback for self-improvement and in assisting administrators with personnel decisions. The main purpose of the study was to develop an appropriate evaluation instrument to be used by low literacy audiences for the assessment of Extension program personnel teaching effectiveness. After content and face validity checks, a nine-item instrument using a pictorial scale, together with an optional open-ended statement, was selected. The items selected reflect six dimensions of teaching: learning, enthusiasm, organization, group interaction, individual rapport, and overall. The low literacy instrument affirms Extension administration's commitment to serving a wide variety of audiences.
Personnel evaluations serve two main purposes: to provide information for the improvement of performance and to assist administrators in making decisions regarding promotion/tenure and annual performance appraisals (Dick, 1981). The issue of who should provide information regarding personnel performance needs to be addressed. No one source of information for performance evaluation has been found to be the most effective (Fisher, Schoenfeldt, & Shaw, 1990). To achieve objectivity and fairness in an evaluation system for Extension program personnel, administrators must examine data from multiple sources. Sources of valid data could include clientele, peers, experts, and supervisors, as well as the individual instructor.
In 1989, Ohio State University Extension (OSU Extension) established an evaluation system for program personnel to use for assessing teaching effectiveness (Spiegel, 1992). The OSU Extension's evaluation system, known as Evaluation of Effective Extension Teaching (EEET), gathers information from clientele, peers, experts, and supervisors. EEET data can be used for self-improvement, performance appraisal, and/or promotion and tenure considerations (Nieto & Berry, 1996).
Four distinct instruments were developed for the EEET: (a) Group Form (Form I and Form II) -- to be used by Extension clientele in group teaching situations; Form I measures teaching only, while Form II measures educational materials and content in addition to instruction; (b) Individual Form -- to be used by Extension clientele in evaluating one-on-one teaching/consulting settings; (c) Expert Form -- to be used by subject matter experts in evaluating lesson plans and/or educational materials; and (d) Peers Form -- to be used by Extension colleagues and/or supervisors for observation of group teaching situations (Nieto & Berry, 1996).
Group Forms, particularly Form I, have been used extensively by Extension program personnel during the last four years. A database was created to provide comparison data to Extension personnel submitting Group Forms (Form I) for analysis. Scores are compared by type of appointment (i.e., district specialist, Extension agent, Extension associate, EFNEP nutrition educator, program assistant, and state specialist), program area (i.e., agriculture and natural resources, community development, family and consumer sciences, and 4-H youth development), and length of employment (i.e., less than two years, two to six years, and more than six years).
Appropriate procedures were followed to address validity and reliability concerns for Group Form I. A reliability coefficient of .93 (Cronbach's alpha) was determined for the summated, nine-item Likert-scale instrument; in addition, the readability index for the instrument was at the 7th-grade level (Spiegel, 1992).
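For readers unfamiliar with the statistic, Cronbach's alpha for a k-item summated scale is computed from the item and total-score variances:

\[ \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^{2}}{\sigma_t^{2}}\right) \]

where \(\sigma_i^2\) is the variance of scores on item i and \(\sigma_t^2\) is the variance of respondents' summed scores. A value near 1 indicates that the items vary together, as statements measuring a single construct (teaching effectiveness) should.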
Extension program personnel across the state indicated the need for an additional group form designed for clientele with low literacy levels (e.g., youth, low-income individuals, senior citizens, immigrants). The Expanded Food and Nutrition Education Program (EFNEP) is one example of a program that serves this type of audience. During the 1995-96 program year, EFNEP educators taught 9,157 eligible families across Ohio. In addition, the EFNEP youth program reached 28,636 youth that year, with 75% in the 8-12 age range (Coplin, 1997).
An extensive literature review revealed a lack of evaluation instruments that low literacy audiences could use to assess teaching effectiveness. The 1992 National Adult Literacy Survey found that approximately 90 million U.S. adults exhibit low levels of literacy (Kirsch, Jungeblut, Jenkins, & Kolstad, 1993).
The main purpose of this research study was to develop an appropriate evaluation instrument to be used by low literacy audiences for the assessment of Extension program personnel teaching effectiveness. Specific objectives of the study were to: (a) describe the evaluation instruments and constructs used to write the low literacy instrument, (b) outline the procedures followed for establishing content and face validity of the low literacy instrument, and (c) report the reliability coefficients calculated for the instrument.
Four instruments were used to create an item pool for the development of the low literacy instrument: (a) the Ohio State University Extension EEET Group Form I, (b) the Ohio State University Student Evaluation of Instruction (SEI) (Gunther, 1996, 1986), (c) the Students' Evaluation of Educational Quality (SEEQ) (Marsh, 1987, 1982), and (d) the Arizona Western College Student Appraisal of Instruction (SAI) (Olp, Watson, & Valek, 1991).
Table 1 displays the number of items each source contributed to the six teaching effectiveness dimensions in the low literacy instrument. After duplicate statements were removed, the first draft of the instrument consisted of 50 of the 54 contributed items.
Table 1. Teaching Effectiveness Dimensions by Evaluation Instrument

| Dimension | EEET | SEI | SEEQ | SAI | Total |
|---|---|---|---|---|---|
| Learning | 3 | 4 | 4 | 6 | 17 |
| Enthusiasm | 1 | 1 | 4 | 1 | 7 |
| Organization | 3 | 3 | 2 | 5 | 13 |
| Group Interaction | 0 | 0 | 3 | 1 | 4 |
| Individual Rapport | 2 | 1 | 3 | 2 | 8 |
| Overall | 0 | 1 | 2 | 2 | 5 |
| Total | 9 | 10 | 18 | 17 | 54 |

Note: EEET = Group Form I, SEI = Ohio State University Student Evaluation of Instruction, SEEQ = Students' Evaluation of Educational Quality, and SAI = Arizona Western College Student Appraisal of Instruction.
The items in the pool were revised into simpler language for use with low literacy audiences. The revision process was completed with the help of an Ohio State University literacy program instructor. Fourteen Extension program personnel (county, district, and state levels) working with low literacy clientele rated the readability of the revised statements. These individuals also commented on the ability of low literacy audiences to use a Likert-type scale. Revisions to the statements and to the format were based on the input of the Extension program personnel. Several agents suggested a format using a pictorial scale (smiling and frowning faces) to aid participants in completing the instrument.
The instrument was reduced to 21 statements: seven on learning, three on enthusiasm, three on organization, two on group interaction, four on individual rapport, and two on overall teaching. The Flesch-Kincaid readability index rated the 21 statements at a fourth-grade level ("Grammatik 6.0a," 1994).
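Grammatik computed the index internally; purely as an illustration, the sketch below applies the standard Flesch-Kincaid grade-level formula, with a crude vowel-group syllable counter standing in for the more elaborate rules a real grammar checker uses.

```python
import re

def count_syllables(word):
    # Rough heuristic: one syllable per run of consecutive vowels.
    # Real readability tools use far more elaborate syllable rules.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    # Standard Flesch-Kincaid grade-level formula:
    # 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words)) - 15.59)

# One statement that survived into the final instrument
print(round(flesch_kincaid_grade("I learned a lot from this teacher."), 1))
```

Short, concrete statements drive both ratios down, which is why plain declarative items score at or below the targeted grade levels.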
A panel of experts was asked to indicate the combination of the 21 statements that would best measure teaching effectiveness. Panel members also commented on the format, scale, and wording of the statements and instructions. The panel included an Extension associate director, two agricultural education professors with an emphasis in teaching, an agricultural education professor with an emphasis in evaluation, two Extension district directors, an Extension agent, and a literacy program instructor. Based on the panel's input, nine statements were selected for the instrument, along with an optional open-ended statement.
To establish face validity, four Extension agents conducted a field test of the nine-item instrument and its open-ended statement. The agents were asked to use the instrument with a low literacy audience (n = 50) and to obtain comments from the clientele regarding the clarity of the instructions, the usefulness of the example provided, the user-friendliness of the scale, the wording and clarity of the statements, and the clientele's willingness to respond to the open-ended question.
Field test responses to the instrument were very positive. Some agents commented that the open-ended question was intimidating and that many clients did not feel comfortable writing. One group told the agent that while many program participants did not want to write comments, the question should remain part of the instrument for those who do. Minor changes to the format and title of the instrument were made based on agent and clientele suggestions.
The name Group Form III was suggested for the new evaluation form, continuing the sequence of group forms in the EEET packet. The statements reflected the six dimensions of teaching effectiveness: (a) learning (i.e., I learned a lot from this teacher, I learned something I can use, and the teacher made learning fun), (b) enthusiasm (i.e., the teacher held my interest), (c) organization (i.e., the teacher clearly answered questions and the teacher was easy to understand), (d) group interaction (i.e., I was asked to share my ideas), (e) individual rapport (i.e., the teacher made me feel welcome), and (f) overall (i.e., I would take another class from this teacher). The Flesch-Kincaid readability index rated the nine-item instrument at a fifth-grade level ("Grammatik 6.0a," 1994). Other readability statistics included a sentence complexity of 5 and a vocabulary complexity of 15 (on scales where 100 = very complex).
Test-retest and internal consistency coefficients were calculated to determine the reliability of Group Form III. A pilot test was conducted with low literacy Extension clients. Five Extension program personnel across the state, each working with a low literacy clientele, were asked to administer Group Form III after completing an educational program. Four of these agents had already participated in the field test of the instrument. Clients were asked to provide the last four digits of their social security number for test-retest (two administrations of the same instrument) purposes. One week later, the same group of clients was asked to complete Group Form III again; paired data were needed to calculate reliability coefficients (percents of agreement). Paired data were collected from a total of 21 individuals. Percents of agreement between the first and second administration of Group Form III ranged from 81% to 95%, with an overall average of 91%. Table 2 displays percents of agreement for the nine statements of the instrument.
Table 2. Percents of Agreement Between First and Second Administration of Group Form III

| Statement | Percent of Agreement |
|---|---|
| The teacher made learning fun. | 95% |
| The teacher clearly answered questions. | 95% |
| The teacher was easy to understand. | 95% |
| The teacher made me feel welcome. | 95% |
| I would take another class from this teacher. | 95% |
| The teacher held my interest. | 90% |
| I was asked to share my ideas. | 90% |
| I learned a lot from this teacher. | 81% |
| I learned something I can use. | 81% |
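The agreement computation itself is simple; the sketch below mirrors the procedure described above, matching each client's two administrations on the self-reported four-digit ID. The data layout and ID values are hypothetical.

```python
def percent_agreement(first, second):
    """Share of matched respondents giving the same rating both times.

    first, second: dicts mapping respondent ID (last four SSN digits)
    to the rating chosen for one statement.
    """
    paired = [rid for rid in first if rid in second]
    same = sum(1 for rid in paired if first[rid] == second[rid])
    return 100.0 * same / len(paired)

# Toy data for one statement: three clients, two administrations
week1 = {"1234": 5, "5678": 4, "9012": 5}
week2 = {"1234": 5, "5678": 3, "9012": 5}
print(f"{percent_agreement(week1, week2):.0f}%")  # prints 67%
```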
Because Group Form III is a summated scale (the nine statements added together represent teaching effectiveness), a measure of internal consistency was required as an additional reliability check. Cronbach's alpha is a reliability test used to determine the internal consistency of a non-dichotomous summated scale. Data from the first administration of Group Form III in the pilot test were used to test internal consistency. Twenty-seven cases were used; a Cronbach's alpha of .74 was calculated for Group Form III. Nunnally (1967) suggested that reliability coefficients between .5 and .6 are adequate in the early stages of research, a threshold the obtained coefficient exceeds.
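Applying the alpha formula given earlier, a minimal sketch of the computation follows; the respondent-by-item matrix is toy data, not the actual pilot-test responses.

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """item_scores: one list per respondent, one rating per item."""
    k = len(item_scores[0])  # number of items (9 on Group Form III)
    totals = [sum(person) for person in item_scores]
    item_vars = [pvariance([person[i] for person in item_scores])
                 for i in range(k)]
    return (k / (k - 1)) * (1 - sum(item_vars) / pvariance(totals))

# Toy data: four respondents rating three items on a 1-5 pictorial scale
scores = [[5, 4, 5], [4, 4, 4], [3, 2, 3], [5, 5, 4]]
print(round(cronbach_alpha(scores), 2))  # prints 0.91
```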
Data were collected from mid-February to mid-July 1996 from OSU Extension program personnel using Group Form III. Based upon the 1,106 evaluations completed, the majority of those using Group Form III were program assistants (45%), followed by administrative and professional agents (43%). The program area submitting the highest number of evaluations was family and consumer sciences (58%), followed by 4-H youth development (38%).
Performance evaluation plays an important role in providing feedback for Extension program personnel improvement and in assisting administrators with personnel decisions. The EEET packet lacked an evaluation instrument that could be used with low literacy audiences. Without an appropriate instrument, Extension program personnel either were not collecting evaluation data or were gathering inaccurate information in group teaching situations with low literacy audiences, yet their teaching performance still needed to be accurately assessed. The availability of an evaluation instrument to be used by low literacy clientele for the assessment of teaching effectiveness gives the OSU Extension system the opportunity to gather valid and reliable evaluation data.
Extension program personnel are constantly reminded and encouraged to reach underrepresented groups in their programming efforts. The development of Group Form III affirmed Extension administration's views about the importance of serving a wide variety of audiences with its educational programs. However, in addition to an evaluation instrument for low literacy audiences, Extension needs to address the needs of other underrepresented groups such as the Amish, non-English speaking clientele, and various ethnic groups. Extension administration also needs to build into the reward system recognition for individuals who document their teaching effectiveness with targeted groups.
Coplin, S. (1997). Ohio 1996 Annual Report: Expanded Food and Nutrition Education Program. Columbus: Ohio State University Extension.
Dick, R.C. (1981). Chairperson's perspective on faculty evaluation. Indianapolis, IN: Meeting of the Indiana State Speech Association (ERIC Document Reproduction Service No. ED 210 714).
Fisher, C., Schoenfeldt, L., & Shaw, J. (1990). Human resource management. Boston, MA: Houghton Mifflin Company.
Grammatik 6.0a [Computer software]. (1994). WordPerfect Grammar Checker. Novell, Inc.
Gunther, R. (1996). Report to the Council on Academic Affairs on the student evaluation of instruction. Columbus: The Ohio State University.
Gunther, R. (1986). Final Report of the Student Evaluation of Instruction Committee. Columbus: The Ohio State University.
Kirsch, I., Jungeblut, A., Jenkins, L., & Kolstad, A. (1993). Adult literacy in America: A first look at the results of the National Adult Literacy Survey. Washington, DC: National Center for Education Statistics, U.S. Department of Education.
Marsh, H. W. (1987). Student evaluations of teaching. In M. J. Dunkin (Ed.), The International Encyclopedia of Teaching and Teacher Education. New York: Pergamon Press.
Marsh, H. W. (1982). SEEQ: A reliable, valid, and useful instrument for collecting students' evaluations of university teaching. British Journal of Educational Psychology, 52, 77-95.
Nieto, R.D., & Berry, A. (1996). Evaluation of effective extension teaching (EEET). Evaluation Packet. Columbus: Ohio State University Extension.
Nunnally, J.C. (1967). Psychometric theory. New York: McGraw-Hill.
Olp, M., Watson, K., & Valek, M. (1991). Appraisal of faculty: Encouragement and improvement in the classroom. Yuma, AZ: Arizona Western College. (ERIC Document Reproduction Service No. ED 336 159).
Spiegel, M. (1992). Synthesizing evaluation: Perspectives, practices and evidence. Proceedings of the American Evaluation Association 92: Extension Evaluation Education Topical Interest Group, Seattle, WA, 27-37.