The Journal of Extension - www.joe.org

August 2012 // Volume 50 // Number 4 // Feature // v50-4a2

Taxonomy for Assessing Evaluation Competencies in Extension

Abstract
Evaluation of public service programming is becoming increasingly important given current funding realities. The taxonomy of evaluation competencies compiled by Ghere et al. (2006) provided the starting place for the Taxonomy for Assessing Evaluation Competencies in Extension. The Michigan State University Extension case study described here presents a field-tested, reliable survey measuring the evaluation competencies of Extension professionals in three domains (situational analysis, systematic inquiry, and project management) and identifies professional development training themes for enhancing the evaluation competencies of Extension academic professionals.


Michelle S. Rodgers
Associate Dean and Director
Cooperative Extension and Outreach
University of Delaware Extension
Newark, Delaware
mrodgers@udel.edu

Barbara D. Hillaker
Research and Evaluation Specialist
David P. Weikart Center for Youth Program Quality
Ypsilanti, Michigan
Barbara@cypq.org

Bruce E. Haas
Reporting Coordinator
Michigan State University
East Lansing, Michigan
haasb@msu.edu

Cheryl Peters
Evaluation Specialist
Michigan State University
East Lansing, Michigan
cpeters@msu.edu

Introduction

In an increasingly competitive and resource-lean environment, the need for greater accountability through outcome and impact reporting has never been more important within the Cooperative Extension Service. Program evaluation increases accountability and documents the outcomes and impacts of community-based programs, and evaluation of public service programming is becoming increasingly important given current funding realities. "Organizational leaders recognize the need to build evaluation capacity as a means to improving program evaluation" (Taylor-Powell & Boyd, 2008).

One way to build evaluation capacity is to improve the evaluation skills and competencies of those involved in programming throughout the organization. Within Extension, this requires developing evaluation competencies among faculty and field educators whose training is in a programmatic discipline and may or may not include evaluation. As Extension leaders decide the most effective ways to provide training to improve those skills, obtaining an accurate assessment of the current competencies of programming personnel is vital (Taylor-Powell & Boyd, 2008).

However, within a large and complex organization like Extension, which operates at multiple levels throughout the state, additional obstacles and challenges exist (Franz & Townson, 2008; Rennekamp & Engel, 2008). Therefore, perceptions of evaluation competencies throughout the state and an understanding of both the perceived obstacles and constructive suggestions of those involved in programming can facilitate professional development efforts and improve program evaluation.

Evaluation Competencies

Most evaluation work in Extension is done by programming professionals or others whose primary training is in a content area rather than in evaluation. Specifying and delineating the competencies required of evaluators has been a challenge in Extension, as it has been for researchers and professional evaluators generally (Ghere, King, Stevahn, & Minnema, 2006). Various attempts have been made over the past decades. One early attempt to list competencies for educational research and evaluation identified 25 general research and evaluation tasks and related competencies (Worthen, 1975). Most of the 25 tasks encompassed subsets of skills, and Worthen's "competencies" refer to both specific skills and knowledge.

More recently, a taxonomy of evaluator competencies has been put forth and revised (King, Stevahn, Ghere, & Minnema, 2001; Stevahn, King, Ghere, & Minnema, 2005) in the American Journal of Evaluation. These competencies reflect the opinions of experts in the field about the skills and attitudes required of professional evaluators (King et al., 2001; Stevahn et al., 2005). In their revised taxonomy, six domains of evaluation competence were specified: professional practice, systematic inquiry, situational analysis, project management, reflective practice, and interpersonal competence. Within these domains, 61 competencies were defined.

Throughout the United States, Extension and the land-grant universities with which it is affiliated have been a context in which both program theory (for example, see Rockwell & Bennett, 2004) and evaluation methods have been practiced, tested, and developed. Michigan State University Extension (MSU Extension), for example, has delineated "evaluation, applied research, and scholarship" as a core competency required of professional staff, building on work by the Extension Committee on Organization and Policy (ECOP, 2002). Research by the Arkansas Cooperative Extension Service (Cooper & Graham, 2001) identified seven core competency areas, with planning and evaluation vital to the changing roles of Extension staff.

This core competency initiative, an ongoing process spread over more than a decade, identified three sub-competencies: (a) designs and implements appropriate data gathering and evaluation procedures to document outcomes and impacts, (b) creates meaningful information from evaluation data to contribute to organizational decisions and reports, and (c) contributes to scholarly investigations and demonstrations to support programming (MSU Extension, 2004).

A taxonomy of evaluation competencies can be used for several purposes (Stevahn et al., 2005). Primarily, it serves as an accepted standard of competencies for professional evaluators in diverse fields and organizations. It can improve training by serving as a reflective, heuristic tool, identifying the scope of evaluation tasks and the characteristics and skills involved in evaluation. Finally, as Stevahn et al. (2005) point out, it can advance research on evaluation.

To serve heuristic purposes and provide a standard for professional practice, a taxonomy must be thorough, and its competencies must be stated in terms general enough to apply across the diverse contexts in which evaluation is conducted. Breadth and thoroughness are also useful in reflective and interactive workshops (Ghere et al., 2006). A taxonomy of evaluation competencies for professional evaluators (Stevahn et al., 2005) provided the framework to study evaluation competencies at MSU Extension, where evaluation is conducted at multiple levels of the organization by educators, faculty, and specialists. These individuals have primary responsibilities other than those of a professional evaluator.

Patterns of self-assessed competencies can further evaluation scholarship by confirming and validating strengths in assessed competencies in a multi-level organization like Extension. Likewise, the perception of weak or limited competencies helps professional development staff identify specific foci for training. Staff struggle to balance program delivery with their own professional development, which includes both subject matter content and core competencies, and Extension as an organization has limited time and money for professional development. One outcome of the study is to direct these limited resources for evaluation training to the areas of greatest need.

Current Study

The current study presents a field-tested and reliable questionnaire to measure the evaluation competencies of Extension professionals. Survey data provide an assessment of evaluation competencies in three domains (Situational Analysis, Systematic Inquiry, and Project Management) and identify needs for proposed professional development opportunities.

Method

Developing the Survey Instrument

The taxonomy of evaluator competencies compiled by Ghere et al. (2006) provided the starting place for developing the instrument used to collect evaluation competency information. The survey was pilot tested at an Extension conference, and feedback was solicited. As a result of the pilot, the domains of systematic inquiry, situational analysis, and project management were selected as most relevant to Extension work. Additionally, more detail and specificity were added to items to clarify their meaning for survey participants and to link each competency to concepts and terms taught in current MSU Extension training modules. A few items about statistical software expertise and familiarity with the MSU Institutional Review Board process were included to address specific skills deemed relevant for MSU Extension.

In designing the survey, several competing concerns were taken into account: first, recognizing the existing evaluation standards and taxonomies of evaluation competencies; second, relating taxonomy terms to Extension work and evaluation conducted by those who are not professional evaluators; third, obtaining enough specific detail to inform professional development opportunities; fourth, streamlining the survey by eliminating tasks not related to work of Extension staff; and fifth, working within time limitations and web survey formatting.

The resulting questionnaire was a Web-based instrument containing 48 quantitative items following the stem "I am able to: __". Respondents indicated their perceived evaluation competence level on a six-point Likert-type scale: the bottom two points were labeled "Novice," the middle two "Proficient," and the top two "Advanced." Descriptions of the capacities associated with the Novice, Proficient, and Advanced categories were provided with the directions for this portion of the questionnaire. For each item, respondents were also asked to indicate whether they would attend training on that competency. The questionnaire also captured job classification and previous evaluation-related training and educational experience. Finally, there were four open-ended questions:

  1. Briefly describe the type of program evaluation you typically do. What information does that evaluation provide?
  2. If you were given ample resources and assistance, what improvement in evaluation would you most like to see for your program area? What additional information that might be obtained from an evaluation would most benefit you in your work?
  3. What do you see as the biggest obstacles or challenges to upgrading evaluation for your program?
  4. What type of evaluation resources or training would be most helpful in improving or upgrading what you do for program evaluation?

Procedure

An email explaining the study and requesting participation in the online survey was sent to approximately 500 Extension employees on the MSU Extension listserv. Listserv membership includes faculty and specialists on campus, educators, and program associates/assistants/instructors.

Data Analysis

One hundred forty-two people completed the online survey (n = 142). Descriptive statistics were run on the demographics (i.e., gender, years in Extension, education, role). Factor analysis was selected to verify the conceptualization of evaluation competencies for Extension professionals. A principal component analysis (extraction method) with a Promax rotation was performed in SPSS to see whether any discernible patterns aligned with the domains that conceptually organized the taxonomy. The Promax rotation was selected due to the assumption that the items and competencies are correlated. Reliability analyses were run to assess the strength of each scale (Cronbach's alpha) and the contribution of each item to its scale. Finally, means were calculated for the competencies for each of the different roles in the organization.
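The original analysis was conducted in SPSS. For readers who wish to reproduce the general extraction and rotation approach on their own data, the sketch below shows a comparable workflow in Python using the factor_analyzer package; the file name and item column labels are hypothetical placeholders, not the study's data.

```python
# Minimal sketch of the extraction/rotation step, assuming item responses are
# stored one respondent per row in "competency_survey.csv" with columns
# item_01 ... item_48 coded 1-6 (hypothetical file and column names).
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("competency_survey.csv")
items = items.filter(like="item_").dropna()

# Principal component extraction with an oblique (Promax) rotation, since the
# competencies are assumed to be correlated.
fa = FactorAnalyzer(n_factors=7, method="principal", rotation="promax")
fa.fit(items)

loadings = pd.DataFrame(fa.loadings_, index=items.columns)
eigenvalues, _ = fa.get_eigenvalues()
variance, prop_var, cum_var = fa.get_factor_variance()

print(loadings.round(2))          # pattern matrix of loadings (cf. Table 1)
print(eigenvalues[:7].round(1))   # eigenvalues for the extracted components
print(prop_var.round(3))          # proportion of variance per factor
```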

Results/Findings

Participant Demographics

About two thirds (65%, n = 92) of the survey respondents were women, and one third (35%) were men. Most respondents had Master's degrees (48%), 22% had Ph.D.s, 23% had Bachelor's degrees, and 4% had high school diplomas. Respondents indicated their role in the organization: 91 were educators, 19 were faculty members or specialists, 20 were program associates/assistants/instructors, and 12 were categorized as other. Faculty/specialists had worked in Extension for an average of 14.0 years (range 0.5 to 35 years); educators averaged 12.9 years (range 1 to 30 years); and program assistants/associates averaged 7.0 years (range 1 to 24 years). Program assistants or associates collaborate with educators and specialists/faculty in the evaluation process (e.g., collecting data) and were therefore included in the study. Level of education and role in the organization were significantly related (p < .001), with the majority of program associates having Bachelor's degrees, the majority of educators having Master's degrees, and the majority of faculty/specialists having Ph.D.s.

Factor Analysis

Seven components were extracted. All 48 survey items were correlated with one another, with r's ranging from 0.241 to 0.908 (all significant at p < 0.01). Table 1 shows the factor loadings for the first five factors, the eigenvalues, and the percent of variance accounted for in the analysis. Examination of the eigenvalues found two primary factors that collectively accounted for 57% of the variance, while the other five factors together accounted for 15.3% of the variance. The factors seemed to support the Ghere et al. (2006) taxonomy but suggested there may be sub-groups within its domains.

The first and fifth factors tended to consist of items conceptually identified as Systematic Inquiry competencies, the second and seventh factors aligned with Project Management, and both the third and fourth factors aligned with the Situational Analysis construct. The sixth factor consisted of the additional questions regarding the Institutional Review Board process that were added for this project. Interpretation of these sub-groups suggested that Systematic Inquiry 1 (factor 1) focused on quantitative methods, while Systematic Inquiry 2 focused on qualitative and mixed methods. Project Management 1 seemed to focus on specific management tasks of the evaluation process, while Project Management 2 (factor 7, not shown) tended to be more general. Situational Analysis 1 tended to focus on program-level analysis, while Situational Analysis 2 tended to consist of items related to analysis of broader community and organizational contexts. These themes provided the starting place for reducing the data to composite factors.

Data Reduction and Reliabilities

The factor matrix was examined to determine whether the identified constructs could be reliably expressed with fewer items, favoring items that loaded primarily on a single factor. Items that loaded roughly equally across multiple factors or that did not fit conceptually were not retained. Reliabilities for the identified item clusters were very strong, with Cronbach's alphas above 0.90. Although factors six and seven produced high alphas with few items, the team decided to drop them to streamline the process. Eight items in total were discarded.

Through this process, the following five factors were constructed: (a) Systematic Inquiry 1 (14 items, alpha = 0.96); (b) Systematic Inquiry 2 (9 items, alpha = 0.94); (c) Project Management 1 (6 items, alpha = 0.93); (d) Situational Analysis 1 - program (6 items, alpha = 0.91); and (e) Situational Analysis 2 (6 items, alpha = 0.93). The resulting revised scale of 41 items represents five domains specific to Extension professionals' competencies in evaluation.
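As an illustration of the reliability step, the sketch below computes Cronbach's alpha for one item cluster using the standard formula; the file name and item groupings shown are hypothetical placeholders rather than the study's exact item assignments.

```python
# Cronbach's alpha for a composite scale: k/(k-1) * (1 - sum of item variances
# / variance of the summed scale). File and column names are hypothetical.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Example: a cluster of items assigned to one factor after data reduction.
survey = pd.read_csv("competency_survey.csv")
systematic_inquiry_1 = ["item_01", "item_02", "item_03", "item_04"]  # placeholder item IDs
print(round(cronbach_alpha(survey[systematic_inquiry_1].dropna()), 2))
```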

Table 1.
Summary of Exploratory Factor Analysis Results of Evaluation Competencies

Item, factor loadings, and communalities
Factor loading columns (left to right): Systematic Inquiry 1, Project Management 1, Situational Analysis 1, Situational Analysis 2, Systematic Inquiry 2; the final number in each row is the item's communality
Evaluate research and research-related reports. .51 .09 .02 .05 .07 .72
Code quantitative survey items numerically. .58 .11 -.32 .10 .34 .83
Combine multiple quantitative items to identify a concept. .56 -.13 -.18 .08 .50 .81
Enter data into a spreadsheet. .83 .17 .07 -.05 -.54 .76
Use statistical software (such as SPSS, SAS or other). .70 .02 .00 -.01 -.23 .84
Run frequencies on quantitative data. .75 -.09 -.09 .04 -.11 .85
Assess reliability of data. .68 -.16 -.02 -.01 .41 .75
Ascertain whether an evaluation measurement is truly assessing the construct of interest (i.e., validity). .61 -.19 .04 -.07 .51 .80
Test for statistically significant differences using an appropriate statistical test (pre and post, independent groups, site differences). .81 -.18 -.02 .00 .19 .79
Interpret statistical findings. .95 -.06 .07 -.08 .02 .86
Interpret evaluation findings. .83 .05 .11 -.04 .09 .85
Communicate evaluation procedures and findings. .75 .32 .09 .11 -.08 .81
Make recommendations based on evaluation results. .60 .36 .22 -.10 .20 .83
Note strengths and weaknesses of the evaluation. .54 .18 .16 -.13 .41 .83
Respond to requests for proposals by writing evaluation section/plan. -.03 .38 -.09 .14 .35 .72
Communicate with stakeholders throughout the evaluation process. -.01 .64 -.24 .23 .21 .72
Develop the budget for an evaluation. -.05 .97 -.18 .14 .02 .88
Justify costs given information needs. .07 .97 .07 .00 -.14 .88
Identify needed resources for evaluation, such as information, expertise, personnel, and/or instruments. -.09 .41 .23 -.05 .23 .74
Supervise others involved in conducting the evaluation. -.15 .48 .13 -.14 .25 .82
Describe a program concisely and clearly. .09 -.13 .79 .35 -.14 .71
Determine the type of evaluation best suited to answer specific questions about the program. .04 -.05 .50 .30 .23 .74
Specify the type(s) of expected program impact (awareness, knowledge, attitudes, skills, aspirations, behaviors, and/or community change). .01 -.01 .94 .08 -.23 .81
Articulate how assumptions of the program design will lead to the desired outcomes (i.e. the series of "if ___, then ___outcome"). .02 -.15 .62 .14 .23 .79
Develop a program logic model to describe the relationships among the programs goals, objectives, activities and expected outcomes. -.18 -.05 .74 -.10 .07 .68
Measure specific increases in knowledge resulting from program participation .11 .03 .43 .01 .20 .66
Articulate the intended use for information obtained from the evaluation. .05 -.00 .28 .36 .28 .74
Analyze the political considerations relevant to the evaluation. -.04 .26 .10 .58 .08 .68
Address conflict that may affect evaluation processes or use of findings. -.02 .30 .08 .53 .29 .83
Respect the uniqueness of the evaluation site and client. -.09 .06 .14 .67 .04 .74
Remain open to input from others. .04 .00 .05 .81 .25 .73
Modify the study as needed. -.08 .02 .08 .57 .26 .73
Use multiple techniques for identifying the interests of relevant stakeholders. -.02 -.02 .36 -.02 .59 .64
Identify culturally appropriate and responsive evaluation approaches and methods. -.04 .02 .18 .34 .42 .70
Train others involved in conducting the evaluation. -.16 .37 .20 -.15 .40 .82
Write formal agreements with others who are involved in conducting the evaluation. -.05 .49 -.23 -.02 .60 .70
Design measures assessing behavior change impact. .22 .01 .38 -.20 .60 .79
Assess strengths and weaknesses of different methods for data collection such as surveys, focus groups, behavior observations etc. .17 .08 -.01 .02 .84 .85
Obtain relevant data from multiple sources such as census data, health records, case studies, program records, surveys. .11 .07 -.05 .09 .69 .67
Obtain qualitative data using multiple formats-- such as focus groups, interviews, and open-ended survey questions. .08 .16 -.06 .05 .78 .84
Code qualitative data into themes and categories. .39 -.03 -.21 .01 .69 .80
Eigenvalues 24.9 4.6 1.8 1.6 1.4
% variance 35.80 21.20 5.50 3.36 2.50  
Factor Mean 3.24 3.27 3.79 3.65 3.19  
Factor Standard Deviation 1.16 1.17 0.96 1.03 1.12  
Note: Factor loadings appear in bold. A promax rotation was used.

Evaluation Competencies by Role

Using the means of the competencies by role, Figure 1 shows that faculty/specialists have the highest level of competency in all areas. While not surprising, it also shows that faculty/specialists perceive their strength to be in Systematic Inquiry 1, while both educators and program associates perceive their strength to be in Situational Analysis 1. Also not surprising, education shows a similar pattern, with perceived competencies increasing as level of education increases.

Figure 1.
Distribution of Evaluation Competency Means by Role

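Figure 1 summarizes the composite means by role. A minimal sketch of that summary step is shown below, assuming a data frame that already contains a role column and the five composite scores; the file and column names are hypothetical placeholders.

```python
# Mean competency score per role for each composite (cf. Figure 1).
# "role" and the composite column names are assumed, not from the study.
import pandas as pd

scores = pd.read_csv("competency_composites.csv")
composites = ["systematic_inquiry_1", "systematic_inquiry_2",
              "project_management_1", "situational_analysis_1",
              "situational_analysis_2"]

means_by_role = scores.groupby("role")[composites].mean().round(2)
print(means_by_role)  # one row per role, one column per competency domain
```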

Training Needs

Using simple frequency data across respondents for each questionnaire item, the competencies for which the most respondents indicated a willingness to attend training were the following:

  • Determine type of evaluation best suited to answer specific questions about the program
  • Code qualitative data into themes and categories
  • Design measures assessing behavior change and impact

Overall, 41% of the respondents indicated willingness to attend training to improve those competencies. Examining willingness to attend training by role, faculty/specialists rarely indicated that they would attend training on a competency if it were offered, except for training related to using qualitative and public data sources, for which 50% were interested. About 12% would attend training in quantitative systematic inquiry, 9% in Project Management, and none in the Situational Analysis domains. Field educators, on the other hand, showed more interest in the Systematic Inquiry items: 37% indicated they would attend training on quantitative systematic inquiry items and 35% on qualitative items. Thirty-three percent would attend training on program-related situational analysis, 28% on project management, and 27% on community- and organization-related situational analysis.

Qualitative Results

The qualitative data obtained from answers to two of the open-ended questions ("What do you see as the biggest obstacles or challenges to upgrading evaluation for your program?" and "What type of evaluation resources or training would be most helpful in improving or upgrading what you do for program evaluation?") were analyzed for the purposes of the study.

Several themes relating to training needs were identified:

  • Desire for more training in evaluation related skills,
  • Preferences or recommendations for the format or delivery method of training, and
  • Other resources that would improve evaluation.

Respondents perceived needs for training in specific evaluation skills, including using software. Respondents in all job categories saw a need for "refresher courses" in statistics or research/evaluation design. Training delivery suggestions by educators included workshops, online classes, shadowing an expert, and individual assistance. A "series" of classes or training modules was also recommended.

Job role differences between faculty/specialists and educators were most pronounced in the types of other evaluation resources that were suggested. Faculty/specialists mentioned student labor and skilled staff. Educators often made suggestions that could improve evaluation at existing skill levels and/or save time: templates, curricula with built-in evaluation, and online resources that do the data tabulation. Several respondents requested examples of "good methods," such as well-designed instruments with items that would give them information about program impact. The most frequent suggestion to improve evaluation, mentioned by respondents from all role categories, can be summarized as a recommendation for "centralizing" evaluation in some manner. Specifically, comments included recommendations for an evaluation team, an evaluation assistant, centralized resources, and a "centralized department that does that work for us."

Discussion

Emerging consensus among professionals and scholars in the field of evaluation has produced a taxonomy of evaluation competencies (Ghere et al., 2006). This suggests that competent professional evaluators should be strong in each of these attitudes and skills. However, in large organizations, roles diverge and competencies (what one is able to do) may differ according to what one actually does (roles and regular responsibilities). The study reported here provides empirical evidence validating the taxonomy and demonstrating the validity and reliability of the survey piloted for assessing evaluation competencies in an organization like Extension. It shows the strengths and weaknesses of evaluation competencies for the various roles in the organization. It provides a baseline and method for assessing changes in these competencies over time for the various roles.

As a starting place for making decisions and improving the quality and practice of evaluation, a baseline of existing competencies is necessary. The study captures that benchmark data. In future years, following the organizational investment in an evaluation specialist position and selected professional development initiatives, MSU Extension will again assess evaluation competencies. The qualitative data uncover the perceptions and opinions of employees regarding training needs and preferences and other evaluation-related needs, goals, and challenges.

Conceptualizing Competencies

The study also contributes to the scholarly understanding of evaluation by examining how underlying dimensions of evaluation skills found in a sample of professionals with varying evaluation responsibilities correspond to conceptual domains that organize taxonomies of evaluation competencies. Generally, it confirms that project management, systematic inquiry, and situational analysis are not only helpful categories to organize a taxonomy, but also tap into distinct patterns of competencies found in a large, multi-level organization where program development and program evaluation occur across multiple content areas from agriculture to community development to programming for youth and children.

The emergence of sub-categories within the domains reflects realities of academic training and on-the-job experience. The distinction between program-related situational analysis and situational analysis related to issues at the community and organization levels is clearly connected to role distinctions and job responsibilities. Educators, whose daily responsibilities are conducting programs, may have strong evaluation competencies that require articulating and evaluating program theory and the evidence supporting their particular program. Situational analysis items at the community and organization level, by contrast, included addressing conflict and analyzing political considerations relevant to the evaluation. The utility of coding situational analysis related to community and organization as distinct from program analysis was also reflected in a few comments that highlighted conflicting needs of MSU Extension and community partners.

Limitations

A recognized limitation is the non-random nature of the sample. Employees who chose to take the survey likely differ from those who did not; it is not known how respondents may differ in motivation, time pressures, or other ways from non-respondents, or whether those differences are reflected in the outcomes. Another limitation that must be emphasized is that the questionnaire is a self-assessment measure, not an actual test of competencies. Respondents' perceptions of themselves may not correspond to more objective measures of their abilities. However, the provided descriptions of the terms "novice," "proficient," and "advanced," and the correspondence of those terms to the numerical response scale, should aid accurate self-assessment and consistency across respondents.

Next Steps

The findings reported provide a starting point for understanding how evaluation competencies fit with evaluation as it is practiced within one organization.

Categories of essential competencies include systematic inquiry, situational analysis, and project management. The competency levels by job classification provided a baseline assessment for MSU Extension. Following this survey, MSU Extension committed to an Evaluation Specialist position with the purpose of enhancing the organization's capacity, within all job classifications, to evaluate educational programs for impact. The survey results provided suggestions regarding needs and preferences for professional development efforts to meet this goal. These findings will inform the professional development agenda with the goal of enhancing the competencies and capacity of MSU Extension professionals. As the organization establishes leadership for evaluation in a program evaluation specialist position and continues to develop, offer, and conduct various training opportunities, the survey data will serve as a benchmark for a reassessment of competency skills in program evaluation. Finally, other Extension organizations nationally that are interested in enhancing evaluation competencies may find value in adopting the survey, which has been tested and validated with this Extension sample.

References

Cooper, A. W., & Graham, D. L. (2001). Competencies needed to be successful county agents and county supervisors. Journal of Extension [On-line], 39(1), Article 1RIB3. Available at: https://www.joe.org/joe/2001february/rb3.php

Extension Committee on Organization and Policy (2002). The Extension system: A vision for the 21st century. Retrieved from: http://ww.aplu.org/NetCommunity/Document.Doc?id=152

Franz, N. K., & Townson, L. (2008). The nature of complex organizations: The case of cooperative extension. In M. T. Braverman, M. Engle, M. E. Arnold & R. A. Rennekamp (Eds.), Program evaluation in a complex organizational system: Lessons from Cooperative Extension. New Directions for Evaluation (Vol. 120, pp. 5-14).

Ghere, G., King, J. A., Stevahn, L., & Minnema, J. (2006). A professional development unit for reflecting on program evaluator competencies. American Journal of Evaluation, 27(1), 108-123.

King, J. A., Stevahn, L., Ghere, G., & Minnema, J. (2001). Toward a taxonomy of essential evaluator competencies. American Journal of Evaluation, 22(2), 229-247.

Michigan State University Extension. (2004). Core competencies. Retrieved from: http://www.msuextension.org/jobs/forms/Core_Compenticies.pdf

Rennekamp, R. A., & Engel, M. (2008). A case study in organizational change: Evaluation in Cooperative Extension. In M. T. Braverman, M. Engle, M. E. Arnold & R. A. Rennekamp (Eds.), Program evaluation in a complex organizational system: Lessons from Cooperative Extension. New Directions for Evaluation (Vol. 120, pp. 15-26).

Rockwell, K., & Bennett, C. (2004). Targeting outcomes of programs: A hierarchy for targeting outcomes and evaluating their achievement. Lincoln: University of Nebraska.

Stevahn, L., King, J. A., Ghere, G., & Minnema, J. (2005). Establishing essential competencies for program evaluators. American Journal of Evaluation, 26(1), 43-59.

Taylor-Powell, E., & Boyd, H. H. (2008). Evaluation capacity building in complex organizations. In M. T. Braverman, M. Engle, M. E. Arnold & R. A. Rennekamp (Eds.), Program evaluation in a complex organizational system: Lessons from Cooperative Extension. New Directions for Evaluation (Vol. 120, pp. 55-69).

Worthen, B. R. (1975). Competencies for educational research and evaluation. Educational Researcher, 4(1), 13-16.