The Journal of Extension - www.joe.org

February 2011 // Volume 49 // Number 1 // Research In Brief // v49-1rb2

The Evaluation Attitudes and Practices of 4-H Educators

Abstract
Extension educators are expected to conduct program evaluation. An Internet survey was sent to county 4-H educators in Ohio to examine their evaluation attitudes and practices, as well as barriers to conducting evaluation. Respondents indicated a range of attitudes about evaluation and limited use of different designs and methods. Having enough time was the greatest perceived barrier. Educators are encouraged to use a diversity of designs and methodologies and to cover a range of topics. Capacity building efforts should include clarity of expectations; opportunities for educators with different needs, interests, and prior experiences; and addressing barriers to evaluation.


Kristi S. Lekies
Assistant Professor
School of Environment and Natural Resources
lekies.1@osu.edu

Amanda M. Bennett
Graduate Research Assistant
Department of Human and Community Resource Development
bennett.709@osu.edu

The Ohio State University
Columbus, Ohio

Introduction

Evaluation has become a vital skill for Extension educators, who must report to funders about program effectiveness, identify best practices, and develop relevant educational efforts (Rennekamp & Engle, 2008). To increase and improve evaluation activities, state Extension programs have used capacity building efforts that help "individuals and teams develop the knowledge, skills, and motivation to evaluate their programs and communicate the results" (Boyd, 2009). Beneficial approaches have included inservice training on topics such as logic models, evaluation design, and data collection, as well as individual consultation and mentoring, collaborative evaluation projects, and communities of practice (Arnold, 2006; Davis, Burggraf-Torppa, Archer, & Thomas, 2007; Taylor-Powell & Boyd, 2008).

A challenge for state evaluation specialists developing training or other capacity building activities is to meet the needs of a diverse group of individuals. Educators come from different educational backgrounds and vary in the amount of formal evaluation training they have received, if any. They also hold mixed attitudes toward evaluation, with some seeing evaluation as unnecessary, low priority, and stressful, and others conducting evaluation on a regular basis as an integral part of university scholarship (Arnold, 2006; Douglah, Boyd, & Gunderman, as cited in Arnold, 2006, and in Taylor-Powell & Boyd, 2008; Franz & Townson, 2008). Furthermore, many educators face time and resource constraints, competing priorities, and a lack of confidence, which create obstacles to implementing evaluation activities (Arnold, 2006).

Although countless evaluation studies have been conducted and published by Extension professionals, limited information is available that summarizes the purposes, methods, and designs used across studies. A recent meta-analysis of over 600 published evaluation studies in the Journal of Extension indicated that program improvement, evidence of effectiveness, and needs assessment were the primary reasons for conducting evaluation. Methods and designs were limited, with the majority of studies using single point-in-time survey methods, and few using comparison or control groups or longitudinal assessment (Duttweiler, 2008). Similarly, Franz and Townson (2008), in a discussion of evaluation in Extension, noted limited evaluation designs and methods, little use of secondary data and control groups, and a focus on program outcomes.

The study reported here examined the attitudes and practices of Extension educators in one state, Ohio. While each state is unique in its programming and expectations, one state's findings can help guide evaluation capacity building efforts in Extension programs across the country. Greater insight into educators' specific attitudes and practices can give evaluation specialists a starting point for planning evaluation training and help determine the levels at which different opportunities are needed. The objectives were as follows:

  1. To understand educators' attitudes regarding program evaluation;

  2. To learn more about the specific reasons for conducting evaluation, evaluation topics addressed, and methodologies, designs, and dissemination strategies used;

  3. To understand overall levels of satisfaction with conducting evaluation; and

  4. To identify perceived barriers to conducting evaluation.

Sample and Method

An online survey was sent to all 4-H educators in Ohio (N=101) in Spring 2007. The respondents (n=62; 61% response rate) were 29% male and 71% female. Almost all (90%) had completed a master's degree, and 7% had completed a doctoral degree. Length of employment ranged from 2 months to 30 years, with an average of approximately 7 years. The majority (66%) had a background in education, with the remainder coming from human development, business, parks and recreation, and other fields. All but seven (89%) had conducted an evaluation at least once.

Participants responded to five statements regarding their attitudes about evaluation (Cronbach's alpha = 0.80), several of which were adapted from the Attitudes Toward Research measure (Royalty, Gelso, Mallinckrodt, & Garrett, 1986). Responses ranged from 1=strongly disagree to 5=strongly agree.

Evaluation experiences covered reasons for evaluation, topics of evaluation, methods and designs, use of logic models, and dissemination strategies. Respondents used a checklist to indicate their reasons and topics. Methods and design questions examined how frequently different methods and designs were used and whether logic models were used in evaluation planning. For these questions, the responses were 1=never, 2=once or twice, 3=3-5 times, and 4=more than 5 times. Dissemination strategies assessed whether educators had experience presenting at conferences and publishing fact sheets, peer-reviewed journal articles, or books.

Satisfaction with evaluation was measured by one question that asked participants to rate their overall experience with evaluation. Responses ranged from 1=very negative to 5=very positive.

Finally, participants were asked to indicate if they had experienced any of 12 possible barriers to conducting evaluation (Cronbach's alpha = 0.89). Responses ranged from 1=not a barrier to 5=a strong barrier.
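As a worked illustration, and not part of the original study, the sketch below shows how Cronbach's alpha, the internal-consistency statistic reported above for the attitude scale (0.80) and the barrier scale (0.89), is conventionally computed from a respondents-by-items matrix of Likert ratings. The function name and the sample ratings are hypothetical.

# Illustrative sketch only (not the authors' analysis code): standard
# Cronbach's alpha for a set of Likert-scale items.
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """item_scores: respondents x items matrix of ratings (e.g., 1-5)."""
    k = item_scores.shape[1]                         # number of items
    item_vars = item_scores.var(axis=0, ddof=1)      # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 4 respondents rating 5 attitude statements on a 1-5 scale
ratings = np.array([
    [4, 5, 4, 3, 4],
    [2, 2, 3, 2, 2],
    [5, 4, 5, 5, 4],
    [3, 3, 2, 3, 3],
])
print(round(cronbach_alpha(ratings), 2))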

Results

Attitudes Toward Evaluation

Table 1 illustrates that approximately 70% of educators agreed or strongly agreed that their evaluations resulted in useful information. However, just over one half were clear about what was expected of them, and fewer than half reported having a strong interest in doing evaluation, finding the evaluation expectations for their job reasonable, or placing a high value on program evaluation in their careers.

Table 1.
Attitudes Toward Evaluation

Attitude Statement | Percent Somewhat or Strongly Agreeing | Percent Neutral | Percent Somewhat or Strongly Disagreeing
I feel that the evaluations I do result in information that is useful. | 71% | 19% | 10%
I am clear about what is expected of me on my job in terms of program evaluation. | 54% | 19% | 26%
I have a strong interest in doing program evaluation. | 47% | 32% | 21%
The evaluation expectations for my job are reasonable. | 45% | 39% | 16%
I place a high value on program evaluation in my career. | 44% | 32% | 25%
n=56-57
Percentages may not total 100% due to rounding.

Evaluation Experiences

The most frequent reasons for conducting evaluations were to identify outcomes or impacts, to improve programs, and to report to funders and stakeholders, each reported by approximately 90% of respondents. Camping, camp counselor education, and volunteer training were evaluated by about 60% of the educators, followed by clubs, fairs, and school enrichment, which were examined by about one-third.

As shown in Table 2, surveys/questionnaires, attendance records, and observations were the methods used most frequently, and focus groups and document reviews were used the least. The most common design was collecting data at one time point, such as at the end of a program. Retrospective pre-post and complex designs with multiple follow-up or comparison groups were used infrequently or not at all. Over three-quarters of educators reported using logic models at least once in their evaluations, but only a small percentage used them on a regular basis.

Table 2.
Evaluation Methods and Designs Used by Extension 4-H Educators

Method or Design | Percent Never Using | Percent Using Once or Twice | Percent Using 3-5 Times | Percent Using 5 Times or More
Method
Surveys/questionnaires | 0% | 2% | 20% | 78%
Attendance records | 2% | 6% | 15% | 77%
Observation | 2% | 7% | 15% | 76%
Individual interviews | 14% | 23% | 21% | 42%
Document review | 27% | 41% | 14% | 18%
Focus groups/group interviews | 29% | 29% | 28% | 14%
Design
One time point | 4% | 0% | 21% | 76%
Pre-post | 9% | 40% | 21% | 30%
Retrospective pre-post | 40% | 38% | 9% | 13%
Multiple times | 53% | 30% | 9% | 8%
Use of comparison groups | 70% | 26% | 4% | 0%
Use of logic models | 23% | 49% | 17% | 11%
n=49-54
Percentages may not total 100% due to rounding.

About 50% of the educators reported they had presented findings at conferences. About one-third had published a report or fact sheet. Eighteen percent had evaluation findings published in peer-reviewed journals or books.

Satisfaction with Evaluation

Overall, experiences with evaluation were mixed. Fourteen percent described their experiences as somewhat negative, 42% were neither negative nor positive, 29% were somewhat positive, and 15% were very positive. No one gave a rating of very negative.

Barriers to Evaluation

Table 3 illustrates that the top barrier to doing evaluation was time, with almost all respondents reporting it was a substantial barrier. Other substantial barriers included completing Institutional Review Board applications, collecting data, getting adequate survey response rates, knowing what to do, getting parental consent, and analyzing data.

Table 3.
Barriers to Conducting Evaluation

Type of Barrier | Percent Reporting Greater Barriers | Percent Reporting Fewer Barriers
Having enough time | 91% | 2%
Having time to complete Human Subjects (IRB) applications | 66% | 15%
Having assistance with data collection | 56% | 18%
Getting enough people to respond to surveys | 52% | 15%
Knowing what to do | 52% | 20%
Getting consent from parents | 48% | 23%
Knowing how to analyze data | 47% | 22%
Having people to turn to for consultation and assistance | 44% | 22%
Having the right equipment (tape recorders, cameras, etc.) | 42% | 29%
Knowing what questions to ask | 43% | 32%
Knowing how to write up results | 39% | 30%
Being able to enter the data into the computer | 33% | 47%
n=54-56

Discussion and Implications

Ohio 4-H educators in the study reported here were actively involved in evaluation. The majority viewed it as beneficial to their work. However, consistent with previous literature (Arnold, 2006; Franz & Townson, 2008; Douglah et al., as cited in Arnold, 2006, and in Taylor-Powell & Boyd, 2008), a range of attitudes was expressed. Of concern was that many did not feel clear about evaluation expectations or feel that those expectations were reasonable. Past experiences with evaluation were mixed as well.

It is important to recognize the different perspectives and experiences of Extension educators and to clarify evaluation expectations. This can be done through new employee orientation and annual performance reviews. Furthermore, not all educators are interested in evaluation to the same extent. Creating a mix of opportunities, including small group activities and partnerships between new and experienced educators, or providing incentives such as recognition or conference travel may help to raise the level of enthusiasm and decrease stress.

Some topics, such as camping and volunteers, received a considerable amount of attention, while others, such as public speaking, were rarely addressed. This could reflect the individual educators' interests or activities. In Ohio, camping is a primary means of 4-H program delivery, and volunteers are utilized widely. Through collaborative approaches (Arnold, 2006; Davis et al., 2007; Taylor-Powell & Boyd, 2008), educators can address issues of importance to them locally, as well as learn more about these issues from a broader perspective or evaluate new topics of interest. Furthermore, logic models can be covered through additional training and follow-up activities to increase their use (Arnold, 2006; Davis et al., 2007). Providing assistance with publishing can help bridge the gap between the number of people who have presented at conferences and those who have published.

Specific barriers to evaluation can be noted. Time is indeed a problem for conducting evaluation. Working collaboratively with others across counties or sharing questionnaires used in local projects can reduce duplication of effort and save time. In addition, completing the Institutional Review Board process, collecting and analyzing data, increasing response rates, and obtaining parental consent were key areas in which Extension workers needed support. These findings suggest broad needs for training or mentoring developed for different levels of expertise.

The findings on evaluation practices suggest a need for greater variation in designs and methods. Additional research can determine whether the approaches educators used were due to a limited understanding of evaluation or to resource constraints, or whether they were the most appropriate given the educators' specific needs. More sophisticated evaluation studies can provide greater confidence in a program's impacts but may not be appropriate given the demands of time, money, and other costs (Braverman & Arnold, 2008; Braverman & Engle, 2009). Consultation or mentoring with experienced evaluators can help educators examine the full range of options and choose the best approach (Arnold, 2006; Davis et al., 2007).

A limitation is that the survey was completed by approximately 60% of county educators. It is unknown whether their perspectives were similar to or different from those of educators who did not respond to the survey. However, the respondents reflected a broad range of educational backgrounds and years of experience.

Although the study reported here focused on the 4-H program in one state, the key findings should be considered in other settings. Clearly, Extension professionals are a diverse group and have differing needs and expectations. By understanding their views and experiences, more intentional training and other educational efforts can be planned.

References

Arnold, M. E. (2006). Developing evaluation capacity in Extension 4-H field faculty: A framework for success. American Journal of Evaluation, 27, 257-269.

Boyd, H. (2009). Practical tips for evaluators and administrators to work together in building evaluation capacity. Journal of Extension, [On-line], 47(2). Article 2IAW1. Available at: http://www.joe.org/joe/2009april/iw1.php

Braverman, M. T., & Arnold, M. E. (2008). An evaluator's balancing act: Making decisions about methodological rigor. In M. T. Braverman, M. Engle, M. E. Arnold, & R. A. Rennekamp (Eds.), Program evaluation in a complex organizational system: Lessons from Cooperative Extension. New Directions for Evaluation, 120, 71-86.

Braverman, M. T., & Engle, M. (2009). Theory and rigor in Extension program evaluation planning. Journal of Extension, [On-line], 47(3). Article 3FEA1. Available at: http://www.joe.org/joe/2009june/a1.php

Davis, G. A., Burggraf-Torppa, C., Archer, T. M., & Thomas, J. R. (2007). Applied research initiative: Training in the scholarship of engagement. Journal of Extension, [On-line], 45(2). Article 2FEA2. Available at: http://www.joe.org/joe/2007april/a2.php

Duttweiler, M. W. (2008). The value of evaluation in Cooperative Extension. In M. T. Braverman, M. Engle, M. E. Arnold, & R. A. Rennekamp (Eds.), Program evaluation in a complex organizational system: Lessons from Cooperative Extension. New Directions for Evaluation, 120, 87-100.

Franz, N. K., & Townson, L. (2008). The nature of complex organizations: The case of Cooperative Extension. In M. T. Braverman, M. Engle, M. E. Arnold, & R. A. Rennekamp (Eds.), Program evaluation in a complex organizational system: Lessons from Cooperative Extension. New Directions for Evaluation, 120, 5-14.

Rennekamp, R. A., & Engle, M. (2008). A case study in organizational change: Evaluation in Cooperative Extension. In M. T. Braverman, M. Engle, M. E. Arnold, & R. A. Rennekamp (Eds.), Program evaluation in a complex organizational system: Lessons from Cooperative Extension. New Directions for Evaluation, 120, 15-26.

Royalty, G. M., Gelso, C. J., Mallinckrodt, B., & Garrett, K. D. (1986). The environment and the student in counseling psychology: Does the research training environment influence graduate students' attitudes toward research? The Counseling Psychologist, 14, 9-30.

Taylor-Powell, E., & Boyd, H. H. (2008). Evaluation capacity building in complex organizations. In M. T. Braverman, M. Engle, M. E. Arnold, & R. A. Rennekamp (Eds.), Program evaluation in a complex organizational system: Lessons from Cooperative Extension. New Directions for Evaluation, 120, 55-69.