An Exploratory Profile of Extension Evaluation Professionals
Abstract
Extension evaluators serve important roles within our organization, given the increased emphasis on program accountability and renewed focus on program evaluation within the Extension system at all levels. What are the main roles and responsibilities of Extension evaluators? What is the nature and scope of their work? What is their academic preparation? How do they receive continued professional development and training? What is the organizational context in which they work? Prior to the study reported here, little was known about Extension evaluators. The exploratory study provides some insight and, more important, raises significant questions for future study of Extension evaluators.
Introduction
Extension, like many other organizations, is working to build the evaluation capacity of faculty and staff at all levels of the organization. This is driven, in large part, by an increased emphasis on accountability, reporting program outcomes and impacts, and evidence-based policies. Stevenson, Florin, Mills, and Andrade (2002) argue that developing internal evaluation capacity within organizations is important for several reasons, such as meeting the accountability demands of funding sources and boards, acquiring new funding for existing programs via competitive applications, and obtaining formative and summative feedback for program managers.
Perhaps in response to this focus on accountability and program improvement, an increasing number of state-level Extension evaluation specialist positions are appearing across the nation. Some of these specialists serve specific program areas such as 4-H youth development, family and consumer sciences, or specific disciplines in agriculture. It is becoming increasingly common for states to hire Extension evaluation specialists of this type who are located within departments or program units. In some cases, the role of "evaluation specialist" is an add-on to existing responsibilities for subject matter content. However, the predominant model is for an evaluation specialist to work with faculty and staff from all program areas. The work done by these specialists ranges from providing technical assistance and training to personally conducting evaluation projects.
With the focus on evaluation and the increasing numbers of "evaluators" in the Extension system nationwide, there emerged a need for these Extension professionals to communicate through a collegial network. In 1986, the Extension Education Evaluation Topical Interest Group (EEE TIG) was formed as a work group under the auspices of the American Evaluation Association. The goals of the EEE TIG are:
- To promote the professional development of evaluators working within the Cooperative Extension system and in other nonformal education organizations.
- To improve evaluation performance through a better understanding of the unique contexts of evaluation in various informal education and technology transfer settings.
- To recognize and enhance the relationship between the functions of program evaluation, program planning, staff development, and organization development in Extension and informal education.
- To provide and promote opportunities for communication and the sharing of evaluation theories, issues, approaches, and practices in Extension and informal education.
- To encourage exemplary evaluation practice in the field of Extension education.
The EEE TIG serves as a catalyst to bring together Extension evaluation specialists, whose numbers have grown rapidly over the last several years. Currently, there are over 160 members of the EEE TIG from different states and territories. Very little is known about this group of individuals aside from their names, university affiliations, and job titles. How are Extension evaluators helping to build evaluation capacity within their state Extension systems?
According to Ristau (2001), effective capacity-building efforts utilize multiple learning formats to equip individuals with the knowledge and skills needed to better evaluate their programs. Ristau (2001) proposes that evaluation capacity builders make didactic presentations on evaluation, facilitate discussion groups on specific evaluation problems and issues, provide direct on-site technical assistance, and offer follow-up consultation to individuals within the organization. Which of these capacity-building strategies are Extension evaluation professionals utilizing? How well prepared are Extension evaluation professionals to carry out these strategies? Answers to these and other questions were unknown; no study in the literature had examined them.
The purpose of the exploratory study reported here was to describe the roles these EEE TIG Extension evaluators carry out within their organizations, their academic preparation for those roles, the nature and scope of their work, and the organizational context in which they work. The findings will help the Extension system gain better insight into its evaluation capacity building, as well as the roles that internal evaluators (Love, 1991) play within individual organizations and the system as a whole.
Methods
The authors created a questionnaire for Extension evaluators, focusing on their roles, professional preparation, and the type of work they are asked to do within their organizations. Members of the 2004-2005 EEE TIG board provided input into subsequent drafts of the questionnaire, which included both open-ended and closed-ended questions. Using Zoomerang, an online survey tool that delivers email invitations to participate in an electronic survey, the authors attempted a census of the 168 members of the Extension Education Evaluation (EEE TIG) listserv of the American Evaluation Association during the summer of 2005.
Forty-two Extension evaluation professionals participated in the survey, a 25% response rate. However, it is known that some of the 168 individuals on the EEE TIG listserv do not work for Extension but are employed by other nonformal organizations. Others hold exclusively administrative appointments in Extension.
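Because some listserv members were ineligible (they do not work for Extension or hold exclusively administrative appointments), the nominal 25% figure likely understates the response rate among eligible evaluators. The short Python sketch below illustrates the adjustment; the count of ineligible members is a hypothetical value, since the study did not report it.

```python
# Sketch of nominal vs. eligibility-adjusted response rate for the EEE TIG census.
# The number of ineligible listserv members was not reported in the study;
# INELIGIBLE = 20 is a hypothetical value used only for illustration.

def response_rate(responses: int, population: int) -> float:
    """Return the response rate as a percentage of the population surveyed."""
    return 100 * responses / population

LISTSERV_MEMBERS = 168   # census frame: EEE TIG listserv members
RESPONSES = 42           # completed questionnaires
INELIGIBLE = 20          # hypothetical count of non-Extension or administrative-only members

nominal = response_rate(RESPONSES, LISTSERV_MEMBERS)
adjusted = response_rate(RESPONSES, LISTSERV_MEMBERS - INELIGIBLE)

print(f"Nominal response rate:  {nominal:.1f}%")   # 25.0%
print(f"Adjusted response rate: {adjusted:.1f}%")  # 28.4% under the assumed ineligibility
```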
Findings
Roles and Responsibilities of Extension Evaluators
Respondents were asked to choose the three activities that comprise a majority of their work as Extension evaluators. The most frequent responses were providing technical assistance on a specific element of an evaluation (method, instrument, etc.), managing or conducting the evaluation, and serving as an evaluator on a team. Table 1 displays the responses; because respondents selected multiple activities, percentages sum to more than 100.
Table 1. Activities that comprise a majority of respondents' work as Extension evaluators

Activity | n | % |
As an evaluator on a team | 17 | 44% |
Called on to provide technical expertise on a specific thing (method, instrument, etc.) | 29 | 74% |
Coaching or mentoring | 16 | 41% |
Institutional research (evaluation studies on organizational development functions) | 6 | 15% |
For-credit courses | 5 | 13% |
Non-credit courses (training or in-service/session teaching) | 9 | 23% |
Supervising, managing, coordinating evaluation efforts | 18 | 46% |
Managing, conducting evaluation | 24 | 62% |
Other | 3 | 8% |
Nature and Scope of Work of Extension Evaluators
Respondents were asked to indicate when they are most likely to be invited into a program development process and were provided a list of eight possible points of entry. Forty percent (17) of the respondents indicated that "evaluation design" is the point at which they were most likely to be brought in, followed by "developing evaluation questions" (13 of 42, or 31%). Five respondents (12%) reported "theory and logic model development," and five (12%) reported "evaluation methods." Very few respondents indicated the polar ends of the presented continuum: engaging stakeholders at one end or communication of evaluation results at the other. Table 2 provides detailed findings on the point in the program development process at which Extension evaluators are approached to become part of the process.
Table 2. Point in the program development process at which Extension evaluators are most likely to be invited to participate

Point of entry | n | % |
Engaging stakeholders | 1 | 2% |
Situation analysis | 0 | 0% |
Theory and logic model development | 5 | 12% |
Developing evaluation questions | 13 | 31% |
Evaluation design | 17 | 40% |
Evaluation methods | 5 | 12% |
Analysis of data | 1 | 2% |
Communication of evaluation results | 0 | 0% |
Total | 42 | 100% |
Motivators of Engagement
Respondents were also asked to indicate what they felt were the most frequent factors motivating the people who invited them to participate in or give input into evaluations (Table 3).
The top three factors cited as a "big influence" were:
- Pressure from a funder.
- Pressure to document impact by administration or supervisor.
- Questions about specific evaluation methods, support, help.
The top three factors that were cited as "somewhat of an influence" are:
- A desire to improve the program.
- Someone recommended that they contact an evaluator.
- The need to document program impact (program in jeopardy).
The top three factors that were cited as "not an influence" are:
- A desire to learn.
- Tenure or promotion documentation needed.
- Someone recommended that they contact an evaluator.
Table 3. Factors motivating requests for evaluators' participation or input (percent and number of respondents)

Motivation | Not an Influence | Somewhat of an Influence | Big Influence |
Pressure from a funder | 8% (3) | 30% (12) | 63% (25) |
Tenure or promotion documentation needed | 41% (16) | 28% (11) | 31% (12) |
The need to document program impact (program in jeopardy) | 20% (8) | 55% (22) | 25% (10) |
Pressure to document impact by administration or supervisor | 8% (3) | 33% (13) | 60% (24) |
A desire to improve the program | 13% (5) | 65% (26) | 23% (9) |
A desire to learn | 45% (18) | 48% (19) | 8% (3) |
Questions about specific evaluation methods, support, help | 15% (6) | 44% (17) | 41% (16) |
Someone recommended they contact an evaluator | 30% (12) | 55% (22) | 15% (6) |
Educational Preparation of Extension Education Evaluators
As Table 4 illustrates, of 42 respondents, 17 (40%) had taken one or two courses labeled "evaluation." Ten (24%) had training in research methods but no courses in evaluation. Seven (17%) had a minor, certificate, or track in evaluation from a higher education institution, and seven (17%) had an academic degree specifically in evaluation. One respondent had no formal coursework in evaluation.
Table 4. Educational preparation of Extension evaluators

Educational preparation | n | % |
Never had formal coursework in evaluation | 1 | 2% |
Had training in research methods but no courses in evaluation | 10 | 24% |
1-2 courses labeled "evaluation" | 17 | 40% |
Minor, certificate, or track from a higher education institution | 7 | 17% |
An academic degree specifically in evaluation | 7 | 17% |
Total | 42 | 100% |
Professional Development and Training
Respondents were asked to describe the influence of various outlets on their professional development, choosing from "no experience," "experience/little influence," "experience/some influence," or "experience/great influence." On-the-job experience and independent study or reading were the experiences with the greatest influence (cited by 79% and 57% of respondents, respectively). AEA training (pre-sessions, 43%; general conference, 40%) and being mentored (41%) were also considered experiences with some influence. The outlets with which many respondents reported no experience were The Evaluation Center at Western Michigan University (95%), the Evaluators Institute (70%), on-line curricula (50%), and EVALTALK (45%). Table 5 presents the full results.
Table 5. Influence of professional development outlets on respondents' development as evaluators (percent and number of respondents)

Professional Development Experience | No Experience | Experience/Little Influence | Experience/Some Influence | Experience/Great Influence |
Evaluators Institute session | 70% (28) | 0% (0) | 15% (6) | 15% (6) |
The Evaluation Center (Western Michigan University) session | 95% (38) | 5% (2) | 0% (0) | 0% (0) |
On-the-job experience | 0% (0) | 0% (0) | 21% (9) | 79% (33) |
American Evaluation Association pre-sessions | 33% (14) | 14% (6) | 43% (18) | 10% (4) |
American Evaluation Association general conference | 17% (7) | 17% (7) | 40% (17) | 26% (11) |
Independent study, reading | 0% (0) | 7% (3) | 36% (15) | 57% (24) |
Being mentored | 15% (6) | 24% (10) | 41% (17) | 20% (8) |
On-line curricula | 50% (20) | 28% (11) | 18% (7) | 5% (2) |
EVALTALK, American Evaluation Association listserv | 45% (18) | 33% (13) | 15% (6) | 8% (3) |
Other professional associations | 20% (8) | 38% (15) | 28% (11) | 15% (6) |
Organizational Context in which Extension Evaluators Work
Of 42 respondents, 25 (60%) reported that they are responsible for evaluation Extension-wide at their institutions; nine (21%) reported that their duties are specific to a program area; and eight (19%) reported "other." The survey did not probe the content of the "other" responses.
Of 40 respondents, 18 (45%) reported that the primary group they support is field faculty and staff; 16 (40%) reported state-level faculty and staff; and six (15%) reported supporting administrators.
Of 40 respondents, 24 (60%) hold a faculty position, and 16 (40%) hold a professional staff position.
On an organizational chart, respondents' positions were most often located within a program development and evaluation unit (15 of 41 respondents, or 37%), followed by administration (10, or 24%), an academic department (9, or 22%), and a program area or programming group (7, or 17%).
Of 40 respondents, 23 (57%) did not have the primary responsibility for preparing federal plans of work and reports, and 17 (43%) did.
Respondents were asked how many full-time equivalents (FTEs) were currently dedicated to supporting evaluation in their state's Extension system. Answers ranged from zero (six respondents) to four (one respondent), and 10 respondents reported that evaluation staffing in their state is "not described this way." The modal response was one FTE, with eight respondents giving this answer.
Limitations of the Study
The low response rate (25%) is a key limitation of the study. "If a high response rate is achieved, there is less chance of significant response bias than in a low rate" (Babbie, 2001, p. 256).
The census of the population of professionals on the Extension Education Evaluation Topical Interest Group (EEE TIG) listserv allows us to discuss findings for our respondents but not to generalize to a larger group.
The EEE TIG listserv includes evaluators who do not work for Extension. The survey instructions asked individuals who do not work for Extension not to complete the instrument. Also, the listserv may not contain all evaluators who work for Extension.
Given that the study focused solely on Extension evaluators, the EEE TIG listserv could be screened to select only those evaluators who work for Extension. In addition, this pre-screened list could be cross-referenced with records from personnel offices at each state Extension office. While the EEE TIG list was an excellent starting point for studying a group that had not previously been examined, refining the list in this way would allow researchers to expand the study to potentially include all Extension evaluators nationwide, even those who do not belong to the EEE TIG.
Once the survey has been emailed to the Extension evaluators, additional follow-up emails reminding potential respondents to complete the survey can be sent. Two such reminders were sent as part of the study reported here. Incentives for completing the survey could also be identified and provided; none were offered in this study.
An addition to the study may be to interview the managers and program teams that work with Extension evaluators to determine their perceptions of the roles of evaluators in their organization: where evaluators enhance outreach work, where working relationships could be further developed, and where evaluators' conclusions are deemed influential.
Discussion and Recommendations
Most evaluators would agree that discussions about program evaluation should begin early in the process of program development. Even early discussions with stakeholders can provide important information about what should be evaluated. Furthermore, without a sound program theory, the likelihood of a program producing its intended results is, at best, left to chance. If a program theory has, in fact, been developed, it is of little use if it has not been effectively communicated to those involved with the program. Evaluation specialists, especially those with expertise in program design, can be of great assistance during these phases of program development. But, not surprisingly, findings of the study reported here suggest that evaluators are not asked for their assistance until questions about evaluation design and methodologies arise. By that time, it may be too late to design an effective evaluation.
- To improve evaluation efforts in Extension, evaluators must strategize ways to become engaged with programmers earlier in the program development process.
According to the respondents, the primary factor motivating program staff to contact them for assistance is pressure from an administrator or funder to document program results. Certainly some of the programs under scrutiny are those that have been in existence for many years. Consequently, there may have been little opportunity to engage an evaluator early on. It is more likely, however, that program staff are unaware of how early involvement of an evaluator can enhance not only the program evaluation, but the program itself.
- Evaluators know how to ask important questions during early stages of program design that improve the soundness of the program. Do Extension educators understand this about their organizations' evaluators?
It is disturbing that the desire to improve a program or to learn more about how a program operates was only a minimal influence on program staff's decisions to seek assistance with evaluation. Such a finding suggests that conducting a program evaluation is looked upon more as an issue of compliance than as an opportunity for growth. In addition to opportunities for program improvement and personal growth, evaluation results also provide important information worthy of sharing through publications and presentations. Cumulatively, evaluation results contribute to the body of knowledge that undergirds professional practice.
- Evaluators must continue to help program staff fully appreciate the merits of conducting sound evaluations, as well as how to best use the results of sound evaluation.
According to the respondents, Extension evaluators tend to be "converts" from other disciplines. The majority of evaluators have taken only one or two courses in evaluation or research methods and have relied on professional conferences, on-the-job learning, or independent study to build their competence as evaluators. It is important to realize, however, that evaluation as a field of study is relatively new. Until recently, formal courses, let alone institutions offering undergraduate or graduate programs in evaluation, were rare. So it is not surprising that many Extension evaluators lack formal coursework in evaluation. It is encouraging, however, that fully one-third of the respondents had either a degree, minor, or certificate in evaluation from an institution of higher education.
- Extension evaluators must continue to promote the development of academic programs that focus on evaluation. It is also important to involve both undergraduate and graduate students in real world evaluative studies that demonstrate the importance of evaluation in today's world.
It is also interesting to note that the majority of Extension evaluators work Extension-wide. That is, they do not serve a single program area. They tend to be housed in a separate program evaluation or administrative unit. A small number of Extension evaluators do, in fact, serve only a single program area and are housed with specialists from that program area. Regardless of the scope of their responsibility, Extension evaluators tend to work equally with state and county staff in support of their evaluation activities.
Is it better to house evaluation expertise within program area groups? Or is a centralized program development and evaluation unit desirable? Should evaluators also have other responsibilities (teaching, programming, administration), or should they devote all of their time to supporting evaluation? Does the placement of evaluation specialists within administrative units affect how Extension staff view program evaluation? It is interesting to note that almost half of Extension evaluators have responsibility for preparing federal plans and reports to CSREES.
- Additional research is needed to explore how placement of evaluators within Extension organizations as well as their specific responsibilities are related to both evaluation capacity and perceptions of the evaluation function.
Conclusion
Like most exploratory studies, this study served to raise as many questions as were answered. However, this research did provide some information on a group that had not previously been studied.
References
Babbie, E. (2001). The practice of social research. Belmont, CA: Wadsworth Publishing.
Love, A. (1991). Internal evaluation. Newbury Park, CA: Sage.
Ristau, S. (2001). Building organizational capacity in outcome evaluation: A successful state association model. Families in Society, 82(6), 555-560.
Stevenson, J.F., Florin, P., Mills, D.S., & Andrade, M. (2002). Building evaluation capacity in human service organizations: A case study. Evaluation and Program Planning, 25, 233-243.