October 2013 // Volume 51 // Number 5 // Research In Brief // v51-5rb3
Use of Pictorial Evaluations to Measure Knowledge Gained by Hispanic Landscape Workers Receiving Safety Training
Abstract
Landscape work is dangerous. In the Southeast, Hispanic workers predominate in landscape industries, and the incidence of functional illiteracy in this group of workers is high. A pictorial, knowledge-based evaluation instrument, requiring no reading skills, was developed to measure the effectiveness of safety trainings delivered to these workers. The instrument was not sensitive enough to quantitatively measure knowledge gained from the training, but it has application as a strong review and discussion tool and could be used in group evaluations to collect qualitative data.
Introduction
In the Southeast, Hispanic workers predominate in landscape and related industries. According to the 2010 Census, Hispanics (or Latinos) comprised 16% of the United States population (Ennis, Rios-Vargas, & Albert, 2010), with most (54%) indicating they were of Mexican descent. However, the group is diverse, and Hispanics hail from many countries, including Puerto Rico, Cuba, and countries in Central and South America.
Landscape work is dangerous. There were 130 fatalities in the industry in 2010 (Bureau of Labor Statistics, 2010), and a quick Web search reveals a continuous stream of accidents and injuries occurring throughout the United States. These accidents and injuries affect both workers and business owners alike (Bauske, Martinez-Espinoza, Maqueda, & Chance, 2008).
University of Georgia Cooperative Extension and the Georgia Center for Urban Agriculture have delivered safety training in Spanish to industry workers since 2005. The impact of the training on safety knowledge is evaluated by written pre-training and post-training evaluations.
Although there is no internationally accepted standard definition of literacy, literacy levels in Mexico are relatively high at 86%, and rates range from 67% to 95% in Central America (Central Intelligence Agency, 2012). Based on anecdotal and survey evidence, we suspect the incidence of functional illiteracy among participants in our safety trainings is considerably higher than these statistics would suggest. Grammatical and spelling errors were common on training evaluation forms, and the classroom skills of some participants were rudimentary. When 249 participants in eight trainings were asked about their education level, 19% (n = 48) reported they had "no formal education," and 28% (n = 70) reported they had "some primary school." The remaining 53% (n = 131) reported attending "some" secondary school or college.
Literacy challenges can be overcome in trainings by using books rich in pictures, interactive slide sets, demonstrations with equipment, flash cards, and educational videos. Researchers have demonstrated that engaging, hands-on trainings with field equipment, flash cards, and demonstrations increase knowledge acquisition (Burke et al., 2006).
Literacy challenges are not easily overcome when written pre- and post-training evaluations are used to assess training effectiveness. In the participant group described above, workers with no formal education (19%) and those who attended some primary school (28%) did not demonstrate mastery of the material on written evaluations at the desired 70% level (Figure 1). Arcury-Quandt, Gentry, and Marín (2011) reported that golf course superintendents also encountered this challenge when they needed evidence of training effectiveness. Although pictures have been used as educational tools to address literacy barriers, little is known about the effectiveness of pictures when used to evaluate learning with Hispanic adults.
Figure 1.
Effect of Education Level on Pre- and Post-Safety Training Evaluation Scores
Several researchers have explored training options for low literacy learners. Some have suggested online or computer-based trainings and associated evaluations (Bedwell & Salas, 2010; Evia, 2011). This format allows for picture-rich presentations. Anger et al. (2006) suggested presenting quiz questions as pictures and asking, "Is the person in the picture working safely?" A smiling face icon was used to indicate a correct response.
Computer (or online) trainings and evaluations present challenges for landscape companies. The ratio of computers to workers at most worksites is low, and workers may have very limited comfort or experience with computers.
Houts (2006) outlined best practices for the use of pictures. Some of the suggestions included determining how pictures could be used to support key points, minimizing distracting details in the pictures, using simple language in conjunction with pictures, including people from the intended audience in the pictures, and having trainers (not artists) design the materials. Kawakami, Kogi, Toyama, and Yoshikawa (2004) stressed the importance of using positive examples in training materials with pictures, and Tapp (2008) suggested using a "Hidden Pictures" game in which trainees identify hazards in a photo or drawing; the drawing or photo should contain at least six hazards that are not immediately obvious. Tapp (2008) also suggested a matchup game in which workers match the appropriate personal protective equipment (PPE) to the task.
Can pictures be used as effective evaluation tools? Standard, non-pictorial evaluations designed to measure knowledge gained in trainings tend to be relatively short (10-20 questions) due to work and time constraints. They must effectively highlight and evaluate knowledge of key concepts and be unambiguous without being overly simplistic. The purpose of the study reported here was to develop and evaluate an assessment instrument consisting of pictures that could be used to determine knowledge gained without requiring reading skills.
Materials and Methods
Three types of pictorial evaluations were developed (a general evaluation, a PPE matchup, and a work site hazards exercise) and included in the evaluation instrument. All pictures were professionally drawn under the guidance of the authors and depicted situations and procedures highlighted in the safety training. The general evaluation consisted of 28 line-drawn pictures of people engaged in work activities and situations. The workers were asked to circle a thumbs-up or thumbs-down symbol on the evaluation form to indicate whether the activity or situation was safe (Figure 2a) or unsafe (Figure 2b). Three PPE matchup pictures were developed (Figure 3). Each picture depicted an activity and environmental conditions. Workers were given an answer sheet with pictures of PPE (long and short sleeve shirts, boots, hats, sunscreen, gloves, ear protection, etc.) and asked to circle the appropriate PPE for the situation pictured. Finally, workers were given a drawing of a work site (Figure 4) and asked to circle the potential hazards in the picture.
Figure 2a.
Example of a Safe General Evaluation Picture
Figure 2b.
Example of an Unsafe General Evaluation Picture
Figure 3.
Example of a PPE Matchup Picture
Figure 4.
Work Site Hazard Picture
All trainings were conducted in Spanish. One instructor presented all eight trainings and evaluations. The safety training consisted of three parts (General Safety, Equipment Safety, and Pesticide Safety). A standard PowerPoint presentation was used at all trainings. The presentation was accompanied by hands-on demonstrations of procedures and PPE.
All three parts of the evaluation were presented in a PowerPoint presentation. After the initial instructions, the instructor made no comments as the pictures were presented, and participants indicated their responses on an answer sheet.
Eight landscape maintenance companies in metropolitan areas of Georgia were chosen to participate in the study. A nonequivalent group comparison design was used to compare scores on the pictorial evaluation. Workers in four companies received the pre-training evaluation, and workers in the other four companies received the post-training evaluation. The groups receiving the evaluation prior to the training served as the comparison group (Table 1). The companies selected were similar in size and performed similar work activities, and the instructor noted no obvious demographic differences among the workers.
Table 1.
Number of Workers Receiving the Pre-Training or Post-Training Evaluation, by Company Site

Pre-Training Company Site | Number of Workers | Post-Training Company Site | Number of Workers
A | 12 | E | 17
B | 16 | F | 32
C | 10 | G | 12
D | 39 | H | 21
Total | 77 | Total | 82
Responses were scored, and independent-samples t-tests were used to test for differences between pre- and post-training mean scores for each of the three pictorial evaluations and for differences among training locations (alpha set a priori at 0.05 for all tests of significance).
Results and Discussion
The Hispanic landscape workers were attentive and engaged during the evaluation process. Verbal feedback was positive. Workers found the process interesting, and there was considerable discussion among the workers, and with the trainer, after the evaluation. No statistically significant differences in scores were found among training locations within the pre- and post-training evaluation groups (data not shown), suggesting consistency in delivery across locations and demographic consistency among the groups.
With the possible exception of the work site hazard component, the instrument does not appear to effectively measure knowledge gain resulting from the training (Table 2). No statistically significant differences were found between pre- and post-training scores for any of the three sections of the pictorial evaluation or for the total combined test score.
Table 2.
Mean Pre-Training and Post-Training Scores on the Pictorial Evaluation Components

Evaluation Component (Maximum Score) | Pre-Training Mean Score (SD) | Post-Training Mean Score (SD)
General Evaluation (28) | 20.5 (3.57) | 20.2 (4.22)
PPE Matchup (24) | 18.4 (2.49) | 17.8 (3.13)
Work Site Hazards (9) | 5.4 (1.78) | 6.4 (1.47)
Total Score (61) | 44.6 (5.32) | 44.5 (6.40)
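For readers wishing to follow the statistical comparison, the sketch below shows how the total-score result could be approximately recomputed from the summary statistics in Table 2 using a pooled-variance independent-samples t-test. This is an illustrative recomputation only, assuming Python with SciPy; the original analysis was run on the individual worker scores, not on these summaries.

# Minimal sketch (Python with SciPy assumed): recomputing the independent-samples
# t-test for the Total Score row of Table 2 from the reported summary statistics.
# The study's analysis used individual responses; this is an approximation for illustration.
from scipy.stats import ttest_ind_from_stats

t_stat, p_value = ttest_ind_from_stats(
    mean1=44.6, std1=5.32, nobs1=77,  # pre-training (comparison) group, Total Score row
    mean2=44.5, std2=6.40, nobs2=82,  # post-training group, Total Score row
    equal_var=True,                   # pooled-variance assumption
)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # p is far above the a priori alpha of 0.05

Run on the Total Score row, this yields a p-value well above 0.05, consistent with the finding of no statistically significant difference between the groups.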
The pictorial evaluation did not detect a significant improvement in safety knowledge following the trainings. In theory, three factors could explain this result: irrelevant training content, a weak instructor, or an insensitive evaluation instrument (Rossi, Lipsey, & Freeman, 2004). In our experience, workers are keenly interested in safety, as are their employers; the content is interesting and relevant to them. The instructor received excellent evaluations from participants and employers on his classroom and teaching skills. The lack of measured improvement, in this case, appears to be the result of the evaluation instrument itself. The instrument does not appear to be sensitive enough to detect knowledge gain and should be reexamined for clarity of illustration and specificity of content.
The workers consistently missed several of the pictures. In some cases the confounding issues were easily identified. In Figure 5a, although the picture illustrates safe practices and appropriate PPE, many participants thought the tree branch would fall on the worker's head and therefore labeled the activity unsafe. In other cases, the source of confusion was not clear (Figure 5b). The need to use the proper tool for the job, specifically a saw rather than loppers on large, thick branches, was discussed at length in the training. Still, most participants incorrectly indicated this was a safe practice.
Figure 5a.
Consistently Missed Safe Practice Picture
Figure 5b.
Consistently Missed Unsafe Practice Picture
Some workers simply could not see the pictures clearly. Questions such as, "Is that guy scratching or talking on the cell phone?" were not uncommon. While almost two-thirds of Americans wear prescription glasses (Lighthouse International, 2010), few of the Hispanic workers wore glasses in our safety training classes.
An evaluator script to accompany the slide presentation may clarify the illustrations. Medhi, Prasad, and Toyama (2007) found that drawings, photos, and video with audio were much better understood than representation types without audio and confirmed that voice annotation was of clear value.
The evaluation process did not eliminate the need for classroom skills that generally accompany literacy, and some workers struggled with this. Despite attempts to minimize the feel of a classroom test, participants still needed to use pens and paper, follow the presentation and numbered answer sheet, and make keen observations in an unfamiliar "test" environment. Like many classroom activities, the evaluation relied on discerning eyesight.
The quasi-experimental design used in the study was well suited to evaluation of landscape workers on company time. Company owners were enthusiastic about time spent training but less supportive of time used to test the evaluation instrument. Conclusions drawn from the study rest on the assumption that the non-randomly chosen groups (the workers in different companies) were similar. No questions about education, age, or country of origin were asked; such questions may be culturally inappropriate in a work situation, particularly with this population. However, the qualitative observations reported here (consistent misinterpretation of specific pictures, difficulty seeing the pictures, and the need for classroom skills) support the conclusion drawn from the quantitative data that the pictorial evaluation instrument was ineffective.
Conclusions
The purpose of the study reported here was to develop and evaluate an assessment instrument consisting of pictures that could be used to determine knowledge gained without requiring reading skills. The pictorial evaluation instrument did not meet this goal. Additional research is needed on best practices for developing and using pictorial evaluations with low-literacy audiences.
These evaluation materials do have appropriate application in teaching. They were found to be strong review and discussion tools and could be used for group evaluations to collect qualitative data.
Worker safety is a top priority in the landscape industry. Measuring and documenting the impact of safety training, which provides lifesaving information, is important and an area in need of further study. There is much to be learned about developing and administering pictorial evaluations to non-English speaking populations with varying levels of literacy and classroom skills.
Acknowledgments
We wish to thank Jay Bauer, Senior Graphic Designer, College of Agriculture and Environmental Sciences, University of Georgia, for his work on the illustrations used in the study. The study was produced under grant number SH-19493-09-60-F-13 from the Occupational Safety and Health Administration, U.S. Department of Labor. It does not necessarily reflect the views or policies of the U.S. Department of Labor, nor does mention of trade names, commercial products, or organizations imply endorsement by the U.S. Government.
References
Anger, W. K., Stupfel, J., Ammerman, T., Tamulinas, A., Bodner, T., & Rohlman, D. S. (2006). The suitability of computer-based training for workers with limited formal education: A case study from the US agricultural sector. International Journal of Training & Development, 10(4), 269-284.
Arcury-Quandt, A. E., Gentry, A. L., & Marín, A. J. (2011). Hazardous materials on golf courses: Experience and knowledge of golf course superintendents and grounds maintenance workers from seven states. American Journal of Industrial Medicine, 54(6), 474-485.
Bauske, E. M., Martinez-Espinoza, A. D., Maqueda, K., & Chance, W. (2008). Safety "pays" for Hispanic employees, company owners, and Extension professionals active in urban agriculture industries. Journal of Extension [On-line], 46(6), Article 6TOT2. Available at: http://www.joe.org/joe/2008december/tt2.php
Bedwell, W. L., & Salas, E. (2010). Computer-based training: Capitalizing on lessons learned. International Journal of Training & Development, 14(3), 239-49.
Bureau of Labor Statistics (2010). National census of fatal occupational injuries in 2010. Retrieved from: http://www.bls.gov/news.release/pdf/cfoi.pdf
Burke, M. J., Sarpy, S. A., Smith-Crowe, K., Chan-Serafin, S., Salvador, R. O., & Islam, G. (2006). Relative effectiveness of worker safety and health training methods. American Journal of Public Health, 96(2), 315-24.
Central Intelligence Agency (2012). Field listing: literacy. Retrieved from: https://www.cia.gov/library/publications/the-world-factbook/fields/2103.html
Ennis, S. R., Rios-Vargas, M., & Albert, N. G. (2010). The Hispanic population: 2010. Retrieved from: http://www.census.gov/prod/cen2010/briefs/c2010br-02.pdf
Evia, C. (2011). Localizing and designing computer-based safety training solutions for Hispanic construction workers. Journal of Construction Engineering and Management, 137(6), 452-459.
Houts, P. S. (2006). The role of pictures in improving health communication: a review of research on attention, comprehension, recall, and adherence. Patient Education and Counseling, 61(2), 173.
Kawakami, T., Kogi, K., Toyama, N., & Yoshikawa, T. (2004). Participatory approaches to improving safety and health under trade union initiative—experiences of positive training program in Asia. Industrial Health, 42(2), 196-206.
Lighthouse International (2010). Use of corrective eyewear. Retrieved from: http://www.lighthouse.org/research/statistics-on-vision-impairment/use-of-corrective-eyewear/
Medhi, I., Prasad, A., & Toyama, K. (2007). Optimal audio-visual representations for illiterate users of computers. Paper presented at the Proceedings of the 16th international conference on World Wide Web, Banff, Alberta, Canada.
Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach (7th ed.). Newbury Park, CA: Sage Publications.
Tapp, L. M. (2008). No reading or writing required: Safety training activities for everyone. ASSE Professional Development Conference and Exhibition. Retrieved from: http://assevirtualsymposium.pbworks.com/f/2008-689tapp.pdf