The Journal of Extension - www.joe.org

August 2019 // Volume 57 // Number 4 // Feature // v57-4a1

Program Evaluation Challenges and Obstacles Faced by New Extension Agents: Implications for Capacity Building

Abstract
In this era of accountability, Extension agents are expected to evaluate their programs, and new Extension agents are not exempt from this expectation. If they lack evaluation capacity, the expectation can contribute to frustration and burnout. It is therefore paramount to explore new Extension agents' evaluation challenges and obstacles to find ways to help them build evaluation capacity. We used a modified Delphi study approach to identify and describe the most important challenges and obstacles faced by early-career Extension agents. The study panel demonstrated consensus on 27 program evaluation challenges and seven program evaluation obstacles. The findings may inform regional collaboration for evaluation competency building and promote meaningful discussions that move support beyond the status quo.


John Diaz
Assistant Professor and Extension Specialist
Department of Agricultural Education and Communication
University of Florida
Lake Wales, Florida
john.diaz@ufl.edu

Anil Kumar Chaudhary
Assistant Professor
Department of Agricultural Economics, Sociology, and Education
The Pennsylvania State University
University Park, Pennsylvania
auk259@psu.edu

K. S. U. Jayaratne
Professor and State Leader for Extension Evaluation
Department of Agricultural and Human Sciences
North Carolina State University
Raleigh, North Carolina
jay_jayaratne@ncsu.edu

Laura A. Warner
Assistant Professor and Extension Specialist
Department of Agricultural Education and Communication
University of Florida
Gainesville, Florida
lsanagorski@ufl.edu

Introduction

Professional development is integral to the advancement of Cooperative Extension's human resources. Extension often uses competency development models to focus training activities around core competencies (Brodeur, Higgins, Galindo-Gonzalez, Craig, & Haile, 2011). Lucia and Lepsinger (1999) defined competency as

a cluster of related knowledge, skills, and attitudes that affects a major part of one's job (a role or responsibility), that correlates with performance on the job, that can be measured against well-accepted standards, and that can be improved via training and development (p. 2).

The competency approach originated with McClelland (1973), who devised it as an alternative to intelligence tests. He argued that intelligence tests were not valid for measuring knowledge and skills for the workplace (McClelland, 1973). According to Harder, Place, and Scheer (2010), McClelland's competency approach was underpinned by four primary assumptions:

(a) performance measures should be observable, (b) criteria should relate to life outcomes such as occupations and education, (c) competencies should be described and defined realistically, and (d) clearly articulated information on how to develop competencies should be made public (p. 45).

Program evaluation is a core competency needed by Extension agents (Suvedi & Kaplowitz, 2016). For improvement and accountability purposes, Extension educators are expected to evaluate the process of education delivery and measure the learner's ability to achieve intended outcomes.

The challenge for those who design and deliver professional development for Extension educators is that no two Extension educators are the same. They are professionals with varied areas of subject matter expertise and experiences that determine their individual training needs (Knowles, Holton, & Swanson, 2005). Radhakrishna and Martin (1999) conducted a survey in South Carolina to understand the program evaluation and accountability in-service training needs of Extension agents. They found that the greatest areas of need involved developing evaluation plans, focusing and organizing evaluations, designing questions and surveys, preparing evaluation reports, and using evaluation results. Typically, program evaluation is a skill set that many Extension educators build once hired, making it important to understand what program evaluation challenges persist following onboarding and new-hire training so that in-service trainings can be informed accordingly (Radhakrishna & Martin, 1999). McClure, Fuhrman, and Morgan (2012) also assessed the evaluation competency needs of Extension educators, in this case in Georgia, and disaggregated those needs by years of Extension experience. For newer Extension agents (those having worked in Extension for 5 years or less), the greatest areas of need related to writing clear questions for a questionnaire intended for youths younger than 12 years old, analyzing questionnaire data, and writing about evaluation findings in an impact statement. More recently, Kumar Chaudhary's (2017) study of natural resources management educators in Florida showed that only limited numbers of Extension educators are able to differentiate short-, medium-, and long-term outcomes; identify indicators; design and deliver follow-up surveys; and conduct data analysis.

There have been multiple studies focused on identifying key evaluation competencies needed by Extension agents (e.g., Boyd, 2009; Bruce & Anderson, 2012; Ghimire & Martin, 2013; Kumar Chaudhary, 2017; McClure et al., 2012), but the most robust taxonomy for evaluation competencies is from Rodgers, Hillaker, Haas, and Peters (2012). They used the taxonomy developed by Ghere, King, Stevahn, and Minnema (2006) to organize Extension evaluation competencies. This taxonomy includes 41 specific evaluation competencies in the three domains of situation analysis, systematic inquiry, and project management.

Lamm, Israel, and Diehl (2013) discussed the practical consequences that result when program evaluation competencies are not developed. They explained that most Extension agents use only posttests administered after an educational activity to evaluate success. According to Lamm et al. (2013), Extension agents may lack the competency to develop plans that measure long-term change or to conduct advanced statistical analysis, resulting in an evaluative focus on participation and participant reaction.

Existing research on program evaluation challenges for Extension agents either does not consider challenges based on tenure within the organization (e.g., Kumar Chaudhary, 2017; Rodgers et al., 2012) or is restricted to a single state (McClure et al., 2012; Radhakrishna & Martin, 1999), leaving a critical gap in the literature. The challenge that arises from the combination of a new agent's lack of program evaluation expertise and the difficulty and time requirements of program evaluation represents an important area of exploration for professional development. As a result, it is necessary to understand the most pervasive challenges that newer agents, in particular, face in evaluating their programs so that onboarding and in-service trainings can be tailored to effectively develop their evaluation competencies.

Purpose and Objective

Our purpose with the study described in this article was to identify and describe the most pervasive challenges and obstacles newer Extension agents face in their program evaluation efforts. The objective was to develop consensus regarding those challenges and obstacles so that Cooperative Extension organizations can provide appropriate support and training.

Methods and Data Sources

We used a modified Delphi study approach comprising three distinct rounds (Warner, 2015) to identify and describe the most important program evaluation challenges and obstacles faced by early-career Extension agents. The study was approved by the University of Florida Institutional Review Board for Human Subjects Research and was conducted in the spring and summer of 2018. We used the Delphi approach because it provides a structured process for developing consensus and identifying educational priorities across a large geographic area (Warner, 2015).

For the study, we operationalized new Extension agent as someone who had been employed for at least 1 but not more than 3 years. We developed an expert panel of county Extension educators (N = 30) with 1 to 3 years of experience working in various program areas in three Eastern states (10 educators from each state). We selected these states on the basis of our work in them and because their educators would represent three distinct and large Extension systems, a factor that helped us obtain diverse perspectives. The expert panel members were selected by Extension district directors and program leaders representing various program areas. Table 1 shows demographics of the panel with regard to program area and highest level of education achieved.

Table 1.
Expert Panel Demographics (N = 30)

Demographic component Percentage
Program area
Family and consumer sciences 35.7
Agriculture 32.1
Horticulture 14.3
Youth leadership development (i.e., 4-H) 10.7
Community and/or rural development 3.6
Natural resources and/or sea grant 3.6
Highest education level
Bachelor's degree 20.7
Some graduate school 6.9
Master's degree 70.0
Doctoral degree 3.4

The first round of the Delphi study consisted of two open-ended questions asking the participants to list the program evaluation challenges and the program evaluation obstacles they faced as newer Extension agents (Table 2). All 30 expert panelists responded to the first-round survey.

Table 2.
Round 1 Survey Questions

Question # Question text
1 Please list all of the program evaluation challenges that you have faced as a newer Extension agent. (program evaluation task(s) or situation(s) that really tests your abilities)
2 Please list all of the program evaluation obstacles that you have faced as a newer Extension agent. (something that blocks your way or prevents or hinders progress)

We used a three-step constant comparative method (Glaser, 1965) to analyze the responses from the first-round survey and develop items for the second-round survey. First, we assessed the data line by line and assigned codes to temporary categories, recoding until the categories became well defined. We then examined the individual categories to establish meaningful relationships among categories. Through this process, we generated a list of challenges and a list of obstacles. We used group coding throughout the process, with three researchers coding together to develop the initial themes. The results of the analysis were then shared with a researcher external to our research team for review and feedback. This process resulted in the identification of 36 challenges and 13 obstacles from the first round of responses.

In the second round, we provided the lists of challenges and obstacles to the expert panel members and asked them to rate the importance of addressing each challenge and obstacle on a 5-point Likert-type scale (1 = extremely important, 2 = very important, 3 = somewhat important, 4 = slightly important, 5 = not important at all). We defined consensus a priori as two thirds of the group's identifying a challenge or an obstacle as extremely important or very important (Warner, 2015). For the second round, we obtained a response rate of 93% (n = 28), and the expert panel demonstrated agreement on 29 challenges and eight obstacles. The group also identified one new challenge and one new obstacle to be included in the third round.
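To make the consensus rule concrete, the brief Python sketch below (illustrative only, using hypothetical ratings rather than the study's data) shows how the percentage of panelists rating an item as extremely or very important can be computed and compared with the two-thirds threshold.

# Illustrative sketch with hypothetical ratings; 1 = extremely important,
# 2 = very important, ..., 5 = not important at all.
def reached_consensus(ratings, threshold=2/3):
    """Return the share of panelists rating an item 1 or 2 and whether
    that share meets the a priori two-thirds consensus threshold."""
    important = sum(1 for r in ratings if r in (1, 2))
    share = important / len(ratings)
    return share, share >= threshold

# Hypothetical second-round ratings from 28 panelists for one challenge item
item_ratings = [1, 2, 2, 1, 3, 2, 1, 2, 2, 1, 4, 2, 1, 2,
                2, 1, 3, 2, 1, 2, 2, 5, 1, 2, 2, 1, 3, 2]
share, consensus = reached_consensus(item_ratings)
print(f"{share:.0%} rated the item extremely or very important; consensus reached: {consensus}")

For this hypothetical item, 23 of 28 panelists (82%) selected extremely or very important, so the item would meet the threshold and be retained for the next round.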

In the third and final round, we provided the shortened lists of challenges and obstacles to the expert panel members and asked the panelists to rate each item as they had done in the second round. According to Hsu and Sandford (2007), this step is an important part of the Delphi process because it allows for the opportunity to record changes in perception. With a response rate of 97% (n = 29), we achieved consensus on 27 challenges and seven obstacles in the final round. Because the panel did not include Extension agents from all states, the reader should consider the panel members' contexts when judging the applicability of the study findings.

Results

All expert panelists indicated that determining program impacts and how to measure them was an extremely or very important challenge for newer Extension agents (Table 3). Additionally, the panel agreed that the following four challenges were the next most important to address, as indicated by the percentages of panelists who rated them extremely or very important: (a) development of accurate evaluation instrument for a given situation, (b) evaluating newly developed programs, (c) management and analysis of data collected, and (d) evaluating long-term impacts of Extension programming (Table 3).

Table 3.
Important Challenges Faced by Newer Extension Agents (N = 29)

Item Percentage^a
Determining program impacts and how to measure those 100.00
Development of accurate evaluation instrument for a given situation 89.66
Evaluating newly developed programs 89.66
Management and analysis of data collected 89.65
Evaluating long-term impacts of Extension programming 89.60
Developing goals and objectives 86.21
Understanding how to integrate evaluation into Extension programming 86.21
Challenges with the evaluation reporting system (i.e. reporting outcomes, structure, time frame of reporting) 86.21
Managing the limited time available for evaluation with the demand for evaluation work 86.21
Reporting on evaluation results 86.21
Understanding what outcomes can be reported in multiple areas 85.71
Difficulty in designing evaluation and collecting evaluation data from the participants of site visits, field days, exhibits, farm demonstrations, etc. 82.76
Evaluating behavior change 82.76
Lack of understanding of evaluation techniques and where it is best to use them 82.76
Maintaining engagement in evaluation among participants and staff that have done it many times before 82.76
Evaluating cost saving or return on investment 79.31
Getting Extension participants to respond to evaluation surveys 79.31
Getting in touch with participants for receiving feedback 79.31
Connecting evaluation to statewide initiatives and priorities 79.31
Identifying impact indicators 75.87
Conducting pretest, posttest evaluation 75.86
Development and implementation of follow-up evaluation 75.86
Evaluating participants that have already adopted the intended behavior/practice 75.86
Measuring how Extension program prevented unwanted outcomes (e.g. reduced childhood obesity) 74.97
Disseminating evaluation results to key stakeholders such as federal and state agencies as well as other organizations 72.42
Evaluating programs that have an extensive set of expected outcomes 72.42
Attaining acceptable participation to strengthen evaluation results 72.42
^a Percentage indicates respondents who selected extremely important or very important.

The panel indicated that lack of evaluation mentorship was the most important obstacle faced by newer Extension agents (Table 4). Additionally, over 70% of the panel agreed that the following obstacles were extremely or very important to address: (a) lack of clear expectations and guidance from supervisor (e.g., county and district Extension directors) for evaluation, (b) lack of evaluation training, (c) lack of data to translate impact from behavior change, and (d) lack of good validated and standardized evaluation tools (Table 4).

Table 4.
Important Obstacles Faced by Newer Extension Agents (N = 29)

Item Percentage^a
Lack of evaluation mentorship 86.20
Lack of clear expectations and guidance from supervisor for evaluation 82.76
Lack of evaluation training 82.76
Lack of data to translate impact from behavior change 79.31
Lack of good validated and standardized evaluation tools 72.42
Lack of institutional knowledge transfer from past Extension educators 68.97
Lack of program participants' willingness to complete evaluations 68.96
^a Percentage indicates respondents who selected extremely important or very important.

Conclusions, Implications, and Recommendations

Some of the challenges we identified mirror those found by Radhakrishna and Martin (1999), Lamm et al. (2013), and Kumar Chaudhary (2017) in their respective studies. Accordingly, our findings confirmed that some of the program evaluation challenges faced by new Extension agents have remained persistent concerns for approximately two decades. In general, Extension agents struggle with developing evaluation plans and instruments to assess outcomes and analyze long-term impacts of their programs. It is not surprising, then, that the expert panel of newer Extension educators in our study unanimously rated determining and measuring program impact as an extremely or very important program evaluation challenge.

Our project produced consensus on the most pervasive challenges that exist across multiple states, broadening earlier claims that were limited in scope, and it also built consensus on the evaluation obstacles new Extension agents face. Together, these results highlight the unique contribution of our study in addressing the program evaluation issues faced by new Extension agents.

Newer Extension agents are concerned with both evaluating new programs and evaluating the impacts of existing programs. Both tasks require development of a good evaluation plan, identification of indicators, design of surveys, and collection and analysis of data, all of which are major evaluation challenges that Extension professionals struggle to overcome. As Extension agents strive to provide higher level impact data to fulfill the accountability requirements of federal and state reporting, new agents are compelled to evaluate their programs for higher levels of outcomes and thus face these challenges. This situation highlights the need to help new Extension agents learn how to plan evaluations, develop survey instruments, and analyze data.

The challenges identified through our study can be used to guide the development of program evaluation training in new-hire onboarding programs. The study results can be used to make refinements to existing approaches but also may inform supplemental in-service training to fill any gaps. The consensus achieved among the study panel members reinforces the need to prioritize these challenges during professional development planning.

The obstacles revealed by the study indicate structural and system-level impediments that may prevent newer Extension agents from developing necessary evaluation competencies. Because Extension agents typically are hired with immediate program evaluation training needs (Knowles et al., 2005), there is a need to provide an adequate system of support and guidance from the outset as agents build their confidence and skills. Lack of good validated and standardized evaluation tools is a considerable barrier impeding new agents' ability to document impacts. This finding reaffirms the need to facilitate the collection of long-term impact data with validated and/or standardized tools to overcome the initial lack of expertise outlined by Lamm et al. (2013).

There is also a need for clear communication from supervisors regarding their evaluation expectations. It is possible that this obstacle results from a paradox: Extension agents typically have considerable freedom to develop creative programs that meet the needs of the communities they serve, yet at the same time they must evaluate their programs in ways that contribute to standardized reporting formats. There may be perceptions among newer agents that there is only one right way to conduct program evaluations. This paradox reveals opportunities to showcase program evaluation strategies in the same way that creative programs are celebrated. Perhaps evaluation specialists could design local, regional, or national evaluation expos or conferences. Additionally, newer agents could be given access to a catalog of sound program evaluations that demonstrate the breadth of potential approaches.

The challenges we identified correspond with findings from previous studies, highlighting the significance of addressing such challenges in building the evaluation capacity of new agents. The structural and systemic obstacles that exist for Extension agents offer a starting point for change. Our findings should serve as a foundation for practical measures to overcome the challenges and obstacles new agents face when Extension organizations plan new-agent training programs. The findings also can serve as a guideline for individual Extension professionals in determining their own professional development plans. In addition to these implications, the process we used can be adapted locally to identify any programmatic competency and subsequently create needs-based professional development.

References

Boyd, H. H. (2009). Ready-made resources for Extension evaluation competencies. Journal of Extension, 47(3), Article 3TOT1. Available at: https://joe.org/joe/2009june/tt1.php

Brodeur, C. W., Higgins, C., Galindo-Gonzalez, S., Craig, D. D., & Haile, T. (2011). Designing a competency-based new county Extension personnel training program: A novel approach. Journal of Extension, 49(3), Article 3FEA2. Available at: https://www.joe.org/joe/2011june/a2.php

Bruce, J., & Anderson, J. (2012). Perception of the training needs of the newest members of the Extension family. Journal of Extension, 50(6), Article 6RIB5. Available at: https://www.joe.org/joe/2012december/rb5.php

Ghere, G., King, J. A., Stevahn, L., & Minnema, J. (2006). A professional development unit for reflecting on program evaluator competencies. American Journal of Evaluation, 27(1), 108–123. doi:10.1177/1098214005284974

Ghimire, N. R., & Martin, R. A. (2013). Does evaluation competence of Extension educators differ by their program area of responsibility? Journal of Extension, 51(6), Article 6RIB1. Available at: https://www.joe.org/joe/2013december/rb1.php

Glaser, B. G. (1965). The constant comparative method of qualitative analysis. Social Problems, 12(4), 436–445.

Harder, A., Place, N. T., & Scheer, S. D. (2010). Towards a competency-based extension education curriculum: A Delphi study. Journal of Agricultural Education, 51(3), 44–52. doi:10.5032/jae.2010.03044

Hsu, C. C., & Sandford, B. A. (2007). The Delphi technique: Making sense of consensus. Practical Assessment, Research & Evaluation, 12(10), 1–8.

Knowles, M. S., Holton, E., & Swanson, R. (2005). The adult learner: The definitive classic in adult education and human resource development (6th ed.). Boston, MA: Elsevier.

Kumar Chaudhary, A. (2017). The effects of county and state faculty networking on the attitude toward evaluation and evaluation practices (Unpublished doctoral dissertation). University of Florida, Gainesville, Florida.

Lamm, A. J., Israel, G. D., & Diehl, D. (2013). A national perspective on the current evaluation activities in Extension. Journal of Extension, 51(1), Article 1FEA1. Available at: https://www.joe.org/joe/2013february/a1.php

Lucia, A. D., & Lepsinger, R. (1999). The art and science of competency models: Pinpointing critical success factors in organizations. San Francisco, CA: Jossey-Bass.

McClelland, D. C. (1973). Testing for competence rather than for intelligence. American Psychologist, 28(1), 1–14. Retrieved from http://servicelearning.msu.edu/upload/2.8.pdf

McClure, M. M., Fuhrman, N. E., & Morgan, A. C. (2012). Program evaluation competencies of extension professionals: Implications for continuing professional development. Journal of Agricultural Education, 53(4), 85–97. doi:10.5032/jae.2012.04085

Radhakrishna, R., & Martin, M. (1999). Program evaluation and accountability training needs of Extension agents. Journal of Extension, 37(3), Article 3RIB1. Available at: https://www.joe.org/joe1999/june/rb1.php

Rodgers, M. S., Hillaker, B. D., Haas, B. E., & Peters, C. (2012). Taxonomy for assessing evaluation competencies in Extension. Journal of Extension, 50(4), Article 4FEA2. Available at: https://joe.org/joe/2012august/a2.php

Suvedi, M., & Kaplowitz, M. (2016). What every extension worker should know—Core competency handbook. The U.S. Agency for International Development. Retrieved from https://meas.illinois.edu/wp-content/uploads/2015/04/MEAS-2016-Extension-Handbook-Suvedi-Kaplowitz-2016_02_15.pdf

Warner, L. A. (2015). Using the Delphi technique to achieve consensus: A tool for guiding Extension programs. Retrieved from http://edis.ifas.ufl.edu/wc183