February 2002 // Volume 40 // Number 1 // Feature Articles // 1FEA3


Evaluating Extension-Based Water Resource Outreach Programs: Are We Meeting the Challenge?

Abstract
Attention from politicians and agency personnel, concerns over duplication in mission, privatization, and the push for competitive funding serve to increase the demand for evaluation and accountability in Extension education. In winter 1997-98, a survey was conducted with Cooperative State Research Education and Extension Service (CSREES) state water quality coordinators to assess the status of evaluation efforts related to water quality outreach projects. Survey results offer insight into when and how accountability issues are addressed throughout the life of a project.


Robin Shepard
Assistant Professor of Life Sciences Communication, Extension Water Quality Coordinator
University of Wisconsin-Madison
Internet Address: rlshepar@facstaff.wisc.edu


Introduction

The demand for effective and efficient programs has always been part of Extension-based outreach, but the degree of emphasis placed on accountability is at an all-time high. Greater diversity in programs, concerns over duplication of effort, calls for Extension educators to use communication campaign-type strategies, and greater reliance on competitive sources of funds are a few of the reasons driving the need for program evaluation (Van den Ban & Hawkins, 1996). Furthermore, increased pressure from politicians and agency personnel through program reviews and audits, as well as the enactment of the federal Government Performance and Results Act in 1993, are direct examples of new, expanded attention to program impacts.

Beyond the issues of accountability raised by funders and politicians, evaluation must be seen as a fundamental part of being a professional educator (Scarborough, Killough, Johnson, & Farrington, 1997). Outreach program managers and staff must ask the basic questions, "Did we accomplish what we intended?" and "How do we know?" This is the essence of evaluation for today's educator.

Evaluation involves a systematic collection of information about the activities, characteristics, and outcomes of programs, personnel, and products, in order to reduce uncertainties, improve effectiveness, and make decisions with regard to what those programs or products are doing and affecting (Patton, 1982). It compares evidence with previously selected criteria to assess the value of a program, activity, or product.

Evaluation should be thought of as different from impact reporting. Evaluation requires a well-planned strategy for collecting a variety of outcome data and measuring them against the program's intent (Bennett & Rockwell, 1995). Some of the data can be linked in causal ways to the program, and some cannot. Impact reporting focuses on specific program results, and the data may be narrowly linked only to the impacts program stakeholders deem important (Patton, 1997; Bickman, 1985; Cronbach, 1982).

Ironically, recent trends emphasizing accountability have increased attention to impact reporting, somewhat at the expense of more objective and thorough program evaluation approaches. One reason is that program staff often focus heavily on program implementation until the program or project is over, and only then turn their attention to impact reporting of project successes (Decker & Yerka, 1990). All too often in outreach education, evaluations are "reactive" in that they are relegated to the last days of a project. Reactive evaluation, in this sense, means using whatever staff time and financial resources remain to fulfill final reporting obligations and to record any impacts that might reflect accomplishments.

Background and Supporting Information

In 1990, then-President George Bush recommended a new initiative for enhancing water quality. The President's Water Quality Initiative (WQI) was created as a cooperative effort among the Cooperative State Research Education and Extension Service (CSREES), the Natural Resources Conservation Service (NRCS), and the Farm Service Agency (FSA). The effort was coordinated with related activities of other United States Department of Agriculture (USDA) agencies, the Environmental Protection Agency (EPA), and agencies of the Departments of the Interior and Commerce.

The WQI stressed integration of projects, data, and information across agency lines, setting the stage for the sharing of program resources and, subsequently, evaluation efforts. The WQI involved Demonstration (DEMO) Projects and Hydrologic Unit Area (HUA) projects, which were designed to encourage producers to adopt specific management practices to protect and/or enhance water quality.

DEMOs and HUAs were initiated between 1989 and 1991. In 1997, these projects began phasing out and were required to conduct final reporting and project evaluation. By 2000, many of the projects had ended, and evaluation efforts associated with final reporting came to the forefront.

To better understand the evaluation practices of these special water resource outreach efforts, a survey of Extension Service water quality coordinators was conducted. These water quality coordinators do not represent overall evaluation efforts by Cooperative Extension; rather, they were selected to assess the evaluation efforts of the special water quality projects they were required to report on and evaluate. This is a situation in which a specific group of decision makers was expected to give evaluation greater attention than is usual in general Extension-based outreach. The assessment was led by the University of Wisconsin in consultation with national program leaders in both CSREES and NRCS.

Objectives of the Study

As the Water Quality Initiative (WQI) projects began reaching their termination dates between 1997 and 2000, the need for program evaluation grew. In order to encourage and support state-level evaluation of DEMO and HUA projects, the University of Wisconsin conducted an assessment of the intentions each state Extension service had toward evaluating their respective projects. This assessment was developed to determine the methodological approaches to evaluation by CSREES state water quality coordinators. The study objectives included:

  1. A description of program evaluation efforts for DEMO and HUA projects.

  2. The identification of barriers to evaluation efforts of DEMO and HUA projects.

  3. The identification of training and professional development needs related to building sustained capacity for conducting evaluation as part of water quality outreach efforts.

Results from this study were used to develop USDA-CSREES and USDA-NRCS guidance pertaining to evaluation and final reporting for DEMO and HUA projects. Subsequent documents were written and distributed nationally in 1998-1999 (Shepard, 1998). Furthermore, the findings from the study have been used, in part, to develop a national evaluation training program by the University of Wisconsin called "Providing Leadership to Program Evaluation." This professional development seminar has been held annually since 1999.

Methodology

A list of potential respondents was generated from a national directory of the 48 CSREES state water quality coordinators. These individuals had some level of administrative responsibility for one or more USDA Demonstration (DEMO) or Hydrologic Unit Area (HUA) projects in their states or territories. State water quality coordinators were also the people most likely to know about evaluation plans and expectations for the projects in their states. They were, in effect, in the administrative position most likely to promote, or even design, the evaluation efforts for the projects.

Wisconsin's state coordinator was eliminated from this list because the principal investigator/author of the study was the lead contact. In addition, three other coordinators were eliminated from the list because their projects were either led exclusively by NRCS (non-Extension Service) or had already shut down and staff had been reassigned. This left a total of 44 as the maximum number of water quality coordinators for this study.

In winter 1997-98, a survey was conducted with the 44 water quality coordinators (i.e., a census of the population of water quality coordinators working with DEMO and HUA projects). The survey was administered using telephone, FAX, and e-mail procedures. An initial telephone call introduced the survey and its purpose to prospective respondents. A screening question gave the coordinator the option of either participating or deferring to a staff member who had been more involved in project evaluation or impact reporting. After agreeing to participate, the respondent was given the choice of receiving the survey questions by e-mail or FAX.

Results are based on 31 responses (a 70% response rate). A single interviewer handled the logistics of the survey (the initial telephone screen, survey distribution by FAX or e-mail, and follow-up telephone calls in cases where surveys were not returned). The interviewer indicated that in the 13 states (30%) where coordinators chose not to participate, most refusals reflected limited concern for evaluation. In a check for non-response bias, those non-respondents stated that they would either address evaluation issues at a later date or were not planning any evaluation efforts.
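
For readers who want to verify the figures above, here is a minimal sketch in Python that simply reproduces the response-rate arithmetic for the census of 44 coordinators; the counts come from the text, and the script itself is only illustrative.

    # Response-rate arithmetic for the census described above.
    population = 44   # water quality coordinators eligible for the census
    completed = 31    # returned surveys
    refusals = population - completed

    print(f"Completed: {completed} of {population} "
          f"({100 * completed / population:.0f}% response rate)")
    print(f"Non-respondents: {refusals} "
          f"({100 * refusals / population:.0f}% of the census)")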

When a respondent indicated that they were responsible for multiple projects in a state, the respondent was asked to answer the questions pertaining only to the DEMO project. This was the case in approximately 12 states where both an HUA and a DEMO existed. If the respondent had responsibility for more than one HUA or DEMO, he/she was asked to consider the project that was most concerned with evaluation and impact reporting.

The survey involved approximately 35 questions, with the total number asked of each respondent varying according to tiered response categories and skip patterns. Questions focused primarily on five areas:

  1. Respondent demographics and their role in project evaluation;

  2. Type of evaluation and procedures used (i.e., formative, summative, reactive; collection of baseline data prior to project implementation; impact-focused reporting);

  3. What data was intended to be collected;

  4. Barriers to conducting evaluation; and

  5. What type of future training and professional development would be appropriate for those responsible for implementing water quality outreach efforts.

Results and Discussion

The analysis of results is based on completed interviews with 31 of 44 state water quality coordinators (i.e., the population of state water quality coordinators, or their designees, who had responsibility for DEMO or HUA evaluation was 44). Furthermore, because the respondents represent a census based on one contact per state, statistical tests and probability estimates are not used.

Most water quality coordinators indicated that a detailed evaluation strategy for their project had not been developed, despite being 1 to 2 years from project closure/termination. Only 8 of 31 (26%) respondents said they were able to assess actual change over time in the adoption of best management practices (BMPs). Such results indicate a lack of advance planning and of commitment to collecting pre-project data with the expectation that specific indicators would be tracked.

By default, this leaves most projects with the prospect of conducting reactive forms of evaluation and measurement that are designed to show, or even prove, that certain changes have occurred. A risk in reactive evaluation is that the methods and approach become focused on recording those changes that are most likely to reflect positively on the project. Reactive evaluation can lead to induced bias and a focus on the accomplishments of the project, rather than an objective assessment of what the project intended to do and whether it accomplished its goals.

This consequence of post-project, or reactive, evaluation is affirmed by the water quality coordinators' concern about the lack of baseline information and true assessment of pre-project conditions (Table 1).

Table 1.
What Are the Challenges for DEMO and/or HUA Evaluation?*

Evaluation Challenge | Number | Percent
Lack of general baseline data | 5 | 17
Biophysical data lacking or deemed unlikely to change | 12 | 41
No record of behavior and/or management practice adoption rates at start of project | 10 | 34
Methodological barriers and concerns of approaches | 10 | 34
Staff expertise in evaluation | 10 | 34
Loss of staff, moving to other projects | 7 | 24
Geographic size/scale of the project area | 5 | 17
Funding for evaluation | 4 | 14
Motivation: do not see a need for evaluation | 4 | 14
Expectations from federal partners not clear | 4 | 14
Other (statements not attributed to these categories) | 10 | 34
Did not provide a response | 2 | 7

* These responses are based on an open-ended question that asked all respondents to describe the obstacles they either encountered, or expected to encounter, in the evaluation of their DEMO or HUA project. Respondents were allowed to provide as many obstacles as they felt appropriate. A total of 81 unique statements were transcribed, and the interviewer grouped them into the categories above. The "other" category represents responses that did not fit the most common categories in the list. Responses are based on 29 valid cases (completed surveys), with two missing cases; all respondents gave at least one statement.
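
Because each coordinator could name several obstacles, the category counts in Table 1 sum to more than the 29 valid cases, and the percentages sum to more than 100. The following minimal sketch in Python, using hypothetical coded statements, illustrates how such multiple-response tallies are computed against the number of valid cases.

    # Minimal sketch with hypothetical data: tallying coded open-ended statements
    # into challenge categories and expressing each count as a percent of valid
    # cases, as in Table 1. A respondent may contribute to several categories.
    from collections import Counter

    coded_statements = [  # (respondent_id, category assigned by the interviewer)
        (1, "Lack of general baseline data"),
        (1, "Staff expertise in evaluation"),
        (2, "Biophysical data lacking or deemed unlikely to change"),
        (3, "Funding for evaluation"),
        # ... the remaining coded statements would follow
    ]
    valid_cases = 29  # completed surveys that supplied at least one statement

    category_counts = Counter(category for _, category in coded_statements)
    for category, count in category_counts.most_common():
        print(f"{category}: {count} ({100 * count / valid_cases:.0f}%)")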

Reactive evaluation and a specific approach to impact reporting are often relied upon in the absence of more planned formative evaluations. Formative uses of evaluation (Scriven, 1967) include assessing audience needs, current knowledge gaps, prevalent behaviors, and information preferences. When assessed prior to a project's start, these issues can be used to influence the design and implementation of the outreach efforts (King & Rollins, 1999; Lanyon, 1994; Mattocks & Steele, 1994). When tracked over time, such measures can show whether changes have occurred. In this way, evaluation becomes an essential component of initial program design and is integrated into the project from the very beginning.
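
As a concrete illustration of tracking such measures over time, the minimal sketch below, written in Python with hypothetical adoption figures, compares a pre-project baseline with an end-of-project measurement; without the baseline values, no change could be computed at all.

    # Minimal sketch with hypothetical figures: a pre/post comparison of the share
    # of producers using selected best management practices (BMPs). The baseline
    # values are precisely what most projects in this study lacked.
    baseline = {"nutrient management": 0.22, "conservation tillage": 0.35}  # at project start
    final = {"nutrient management": 0.41, "conservation tillage": 0.47}     # at project close

    for practice, start in baseline.items():
        end = final[practice]
        print(f"{practice}: {start:.0%} -> {end:.0%} "
              f"(change of {(end - start) * 100:+.0f} percentage points)")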

One barrier associated with formative evaluation approaches is deciding what to measure. Water quality projects are by their nature directed at protecting and/or enhancing water quality. This encourages program staff to focus on biophysical changes to the water as an indicator of program success or failure. While the overall, or long-term, intent of outreach education may be to protect or enhance water quality, there are other impacts that can be assessed, such as the application of knowledge and skills or the adoption of improved management practices (Rogers, 1995).

Such practices are at the heart of most outreach programs, because staff promote certain actions that research has shown to be beneficial to protecting water quality and/or farm profits. Therefore, both long-term indicators of impact (i.e., physical changes to water quality) and more immediate impacts (i.e., changes in farm management and behavior) were assessed by this study to determine the needed level and type of evaluation support for and from state water quality coordinators.

The study found that only three (10%) of the states actually conducted a formative assessment strategy for their project, that is, collected pre-project needs and audience characteristics specifically for DEMO or HUA efforts. However, when all coordinators were asked what information they intended to use to determine program impact, they reported relying on information ranging from biophysical environmental indicators (e.g., sediment loading, biotic indexes) to behavioral indicators (e.g., awareness, knowledge, and/or adoption of practices). When a range of potential indicators was assessed for intended use, many states intended to rely on such indicators without any true baseline from which change could be adequately assessed (Table 2).

Table 2.
What Type of Information Will Be Used to Assess Project Impacts Without Pre-Assessment of Baseline Conditions?*

Type of Impact Indicator | Number Expected to Use Indicator | Number (%) Planning to Use Without Baseline

Biophysical indicators
Sediment loading | 8 | 4 (50%)
Biotic/ecological indexes | 6 | 3 (50%)
Structural practices in place | 15 | 6 (40%)

Behavioral indicators
Awareness of management practices | 13 | 5 (38%)
Knowledge of management practices | 16 | 8 (50%)
Actual present use of management practices | 18 | 10 (55%)
Water quality perceptions | 16 | 8 (50%)
Participation in educational events | 16 | 9 (56%)

Agency measures
Dollars expended | 11 | 6 (55%)
Overall number of activities conducted | 18 | 12 (67%)
News media attention (articles, media releases) | 18 | 15 (83%)

* Respondents were asked to read a listing of potential impact measures and then place a check beside those that the project intended to use without a pre-assessed baseline. Percentages are based on 23 valid cases (completed surveys) with eight missing cases.
Note: Respondents also checked other measures for which pre-project status or baseline data are not essential. Those measures include the number of participants taking part in programs (80% indicated they would use this measure) and the number of cost-share agreements signed (80% indicated they would use this measure).

Building evaluation skills and developing personal confidence to use those skills is critical for educators to answer questions about the effectiveness and efficiency of their programs. It may not be necessary for educators to become evaluation experts; however, they do need a fundamental understanding of methods and ethical standards if they are to make evaluation part of overall program design.

This assessment of state water quality coordinators asked several questions pertaining to the training and professional development needs of project staff. In the majority of states, more training was viewed as beneficial to building internal capacity necessary for making evaluation a more common part of projects and outreach programming (Table 3). Specifically, water quality coordinators felt staff needed:

  • Better understanding of when specific sociologic measurement is appropriate;
  • Knowledge of what type of data can and should be collected; and
  • The skills to choose reliable and appropriate methods for collecting sociological data.

A common concern expressed by water quality coordinators in open-ended responses was that project staff are more likely to have technical and physical science backgrounds (e.g., agronomy, soil science, crop production) and may not be prepared for, or feel comfortable using, social science measures (e.g., behavior change, practice adoption, perceptional indices). Capacity building through training and professional development should consider more than just describing what to evaluate or track (Seevers, Graham, Gamon, & Conklin, 1997). In particular, training should address the appropriateness of, and ethical issues associated with, social science data collection through surveys, case study techniques, focus groups, and other methods.

Table 3.
Water Quality Coordinators Indicating They or Project Staff Would Benefit from Evaluation Training.*

Training Topics | Number | Percent
How to evaluate biophysical/agronomic change | 19 | 79
How to evaluate audience/individual change | 19 | 79
How to track participation rates and audience reactions | 18 | 75

* Based on 24 valid cases (completed surveys) with seven missing cases.

Administrative support considerations may also affect evaluation efforts for water resource projects entering their final stages of activity. An overwhelming concern for water quality coordinators was that as the DEMO and HUA projects reached the end of their federal funding, project staff began leaving or were reassigned to other projects, leaving no one to conduct or help with the evaluation efforts. As of 1998, most of the 31 states surveyed indicated they were seeking staff time and funding for evaluation. Seventeen of the states indicated that less than one-half of one staff person's annual work time would be dedicated to evaluation. Twelve of the states expected to spend $15,000 or less on evaluating their DEMO and HUA projects. Among the more committed states, one planned to dedicate 2.2 annual staff positions to evaluation work, while two states planned to spend nearly all of their final year's project dollars on evaluation.

Conclusions and Implications

Results from this study indicate an overwhelming lack of attention to project evaluation in special water quality outreach efforts. Indeed, the outright refusal of 30% of the states with DEMO or HUA projects to participate in the survey illustrates the low priority often given to evaluation efforts, especially in light of the most common reasons given for that refusal:

  1. Evaluation concerns would be addressed later (despite the project's nearing termination) or

  2. No plans to address evaluation were in place at all.

The survey results point to these main explanations and conclusions.

Despite the best intentions, the approach to DEMO and HUA project evaluation seemed to be primarily reactive, using neither basic evaluation planning nor formative research techniques. Much attention goes into just "doing" outreach, and by the time evaluation is considered, outreach-focused staff and faculty have moved on to the next outreach program. Without early attention to program evaluation as part of program design and implementation, adequate indicators of potential change, against which later comparisons could be made, are never collected.

Those who conduct and administer water quality outreach programs view evaluation as important; however, barriers to conducting evaluation must be addressed. These barriers include: dedicating time for evaluation beyond that allowed for conducting programs, assigning staff and funding, and recognizing quality evaluation efforts.

There is a need to improve staff skills and capacity to conduct evaluations. The training and professional development most requested includes how to evaluate changes in the biophysical environment, agronomic impacts of water quality practices, and the extent to which farmers adopt water quality practices. Training should give specific attention to social science data collection techniques and methods. This requires more than merely a survey methods course and should include topics such as a description of various methods and when to use them, how to ensure credibility and confidence, and ethical issues in evaluation research.

Scientific inquiry and the need to better understand why things occur as they do are part of the culture on which the nation's Land-Grant institutions were founded. However, anecdotal comments from the telephone survey strongly suggest that program evaluation is not given the same status as other aspects of a project, such as program implementation or even applied research efforts. Even more problematic is the apparent lack of support for specific approaches, such as formative evaluation, as part of program planning.

University administrators, program leaders, and even project managers often claim to place a high priority on evaluation, but when it comes to allocating resources and rewarding faculty and staff for quality evaluation work, the commitment is often lacking. There is administrative hesitation to dedicate staff time and expertise, and especially financial resources, to evaluation. This is supported by the overwhelming absence of baseline information collected prior to, or even in the early stages of, DEMO and HUA projects.

Administrators and project staff should acknowledge and support evaluation in substantial ways. Such acknowledgement should include at a minimum:

  • Recognizing quality evaluation efforts in project reviews;

  • Identifying evaluation as a responsibility of project staff and protecting the time needed to conduct such work;

  • Acknowledging evaluation work in annual plans of work and individual merit reviews;

  • Allocating funds specifically for evaluation efforts; and

  • Actively identifying training and professional development opportunities that increase staff skills and capacity for evaluation.

These actions are necessary to establish an organizational culture that recognizes evaluation as part of the educator's job—not merely an add-on task to be done if and when there is time. Without a shift in our support for evaluation, it will always be considered a nuisance requirement at a project's end, done only to justify the existence of the next program or project.

References

Bennett, C., & Rockwell, K. (1995). Targeting outcomes of programs (TOP), an integrated approach to planning and evaluation. (A program planning guide prepared for USDA employees.) Washington, D.C.: Cooperative State Research, Education and Extension Service.

Bickman, L. (1985). Improving established statewide programs: A component theory of evaluation. Evaluation Review, 9(2), 189-208.

Cronbach, L. J. (1982). Designing evaluation of educational and social programs. San Francisco: Jossey-Bass.

Decker, D. J., & Yerka, B. (1990). Organizational philosophy for program evaluation. Journal of Extension [On-line] 28(2). Available at: http://www.joe.org/joe/1990summer/f1.html

King, R., & Rollins, T. (1999). An evaluation of an agricultural innovation: Justification for participatory assistance. Journal of Extension, 37(4).

Lanyon, L. E. (1994). Participatory assistance: An alternative to transfer of technology for promoting change on farms. American Journal of Alternative Agriculture, 9(3), 136-142.

Mattocks, D., & Steele, R. (1994). NGO-government paradigms in agricultural development: A relationship of competition or collaboration? Journal of International Agriculture and Extension Education, 1(1), 54-61.

Patton, M. (1997). Utilization-focused evaluation. Thousand Oaks, California: Sage.

Patton, M. (1982). Practical evaluation. Newbury Park, California: Sage.

Rogers, E. M. (1995). Diffusion of innovations (4th ed.). New York, NY: Free Press.

Scarborough, V., Killough, S., Johnson, D., & Farrington, J. (Eds.). (1997). Farmer-led extension. London: Intermediate Technology Publications, Ltd.

Scriven, M. (1967). The methodology of evaluation. In R.W. Tyler, R.M. Gagne, and M. Scriven (Eds.), Perspectives of curriculum evaluation. Chicago, Illinois: Rand McNally.

Seevers, B., Graham, D., Gamon, J., & Conklin, N. (1997). Education through cooperative extension. Albany, NY: Delmar.

Shepard, R. (1998). A guide to project closure and final report planning. Madison, Wisconsin: Cooperative State Research Education and Extension Service and University of Wisconsin-Extension.

Van den Ban, A. W., & Hawkins, H. S. (1996). Agricultural Extension (2nd ed.). Cambridge, Massachusetts: Blackwell Science Ltd.