December 2005 // Volume 43 // Number 6 // Feature Articles // 6FEA3

Limited Resources--Growing Needs: Lessons Learned in a Process to Facilitate Program Evaluation

Abstract
Extension educators face the challenge of delivering reliable information as input to the decision-making process of clientele groups. This article draws on a process used to evaluate member perceptions of program effectiveness for a commodity organization where the program beneficiaries are also the funding source. While a vast literature covers evaluation procedures and theory, there are few practical evaluation examples linked to that theory, a gap this article addresses. We use a recent project with the Michigan Apple Committee (MAC) to illustrate the process and draw attention to critical steps for a successful evaluation.


Cristobal Aguilar
Graduate Research Assistant
aguilar9@msu.edu

Suzanne Thornsbury
Assistant Professor
thornsbu@msu.edu

Department of Agricultural Economics
Michigan State University
East Lansing, Michigan


Introduction

This article draws on a process used to evaluate member perceptions of program effectiveness for a commodity organization where the program beneficiaries are also the funding source. While a vast literature covers evaluation procedures and theory, there are few practical evaluation examples linked to that theory, a gap this article addresses.

In today's economic climate, public and private entities are critically evaluating expenditures as they search for more efficient and effective allocation of available resources. Program evaluation is often undertaken by the funding source who, as a part of the assessment, wants to see measures of effectiveness in addition to a descriptive report of activities (O'Neill, 1998). Program beneficiaries are also important evaluators and must ultimately register positive impacts in order to justify program continuation. Commodity and/or industry organizations have been traditional partners in Extension programming and frequently serve as both funding source and program beneficiary.

It is not only Extension that must increase evaluation efforts in order to generate systematic and convincing evidence of programming value (Stupp, 2003), but also the traditional users of that programming. As in most state Extension services, resources within commodity organizations themselves are becoming more limited, while members demand increased services and greater accountability from their own boards. The role of Extension educators expands to include facilitating program evaluation and providing measures of accountability for their partners' activities (Decker & Yerka, 1990). Limited resources often require commodity groups to make difficult decisions regarding allocation of resources among programmatic areas of emphasis (e.g., production vs. marketing).

We use a recent project with the Michigan Apple Committee (MAC) to illustrate the process and draw attention to critical steps for a successful evaluation. MAC is an organization of approximately 1,000 apple growers in Michigan, supported through a check-off on commercial apple sales. These grower resources are then used to fund six programmatic areas: Advertising, Merchandising, Export, Public Relations, Industry Services, and Apple Research. The MAC program evaluation provides insight about the process, linkages to group decision-making, and lessons learned through implementation.

Evaluation Process

The evaluation process is straightforward, yet there are critical points that an Extension educator must address to maximize useful feedback for their partners. The general steps in the evaluation process are first outlined below and then discussed in greater detail in the following sections. In the discussion we compare the literature on program evaluation with our own experience to emphasize critical points of the process and highlight lessons learned in application of the concepts.

  1. Maximize support from organization leadership
    • Definition of clientele needs and evaluation purpose
    • Consistent identification of program areas
  2. Develop and administer evaluation instrument
    • Identify specific data needs
    • Definition of population/sample
    • Questionnaire development
  3. Analyze and deliver results
    • Analysis
    • Presentation to organization leadership
    • Presentation to organization membership
    • Follow-up and ties to decision-making

Leadership Support

Explanation

Douglah (1998) highlights the critical importance of a positive attitude towards evaluation from people involved in the process. In our case, the Executive Director and the Chair of the MAC Board initiated the evaluation, motivated, at least in part, by membership demands for Board accountability. Despite the request for assistance and interest by the MAC Board in evaluating its programs, there remained some doubts about self-assessment. The proactive participation of the MAC Executive Director and the Chair of MAC was vital to convince board members of the usefulness of evaluating past performance.

Evaluation does not aim to replace decision makers' experience and judgment but rather offers systematic evidence that can inform further experience and judgment (Alkin, 1990). It was necessary to spend considerable time with the board discussing methodology and the pros and cons of anticipated outputs. It is important to make sure individuals interested in the study results understand that findings will not be a panacea for every problem they face. Evaluation serves to guide future decisions by helping identify issues of current relevance and significance.

A clear definition of the evaluation objective is essential in order to identify information needed, as well as to define which instruments will be used (Taylor-Powell, Steele, & Douglah, 1996). For example, in our case the initial purpose of the evaluation process was to reveal member perceptions about past performance and solicit opinions about future directions for MAC programs in order to provide input for subsequent Board decisions. A second, but not necessarily less important, purpose of the MAC Board was articulated during early discussions. The leadership was very interested in demonstrating to their members that individual opinions and beliefs were a valued input to organizational decision-making. Thus, the act of undertaking evaluation was itself an integral part of achieving success.

Reviewing goals is critical to ensure that everybody has a common understanding of each programmatic area and the terminology used. For example, while reviewing goals of existing programs (Advertising, Merchandising, Export, Public Relations, Industry Services, and Research), some board members found that even they were unclear about the differences between the Advertising and Merchandising Programs. The names could be confused, while the programs themselves had very different goals. Thus a very early success of the evaluation process was identification of an immediate opportunity to clarify program area definitions and goals among respondents. Consistency in understanding ensures a greater degree of accuracy when assessing the programs and thus yields more reliable results.

Having leadership that is highly motivated, committed, and influential acting as a catalyst is important to ensure success of an evaluation. Active participation of the MAC Board and leadership was key, both to ensure political support for the initiative and to obtain important survey design input to ensure results addressed the organization's concerns.

Lessons Learned

  • Time spent ensuring solid leadership support and understanding will be greater than anticipated but extremely valuable. Ultimately this support will be critical to project success.

  • There may be significant differences in expectations and understanding among leadership that must be addressed and resolved before undertaking evaluation.

  • Evaluation needs are often multiple and almost always broader than initially expressed.

  • The process itself contributes to a deeper understanding of the importance of evaluation as both a reflective and a learning process.

Evaluation Instrument

Explanation

The literature distinguishes two primary purposes for evaluation: "summative" and "formative" (Scriven, 1967). A summative evaluation serves to document and/or quantify total effectiveness of a program. Such documentation most often takes place once activities of the program have been completed (Douglah, 1998). In comparison, formative evaluation is designed to assess an ongoing program with the goal of improving current efforts. Scriven (1991) recommends that the design of any formative process and instrument be broad enough to serve as the basis for future summative evaluation at a project's end. Our assessment of MAC provided formative input, but was also designed for later summative use.

Design of an evaluation instrument depends on assessing project scope subject to resource availability (money and time) (Israel, 1992). Taylor-Powell and collaborators (1996) elaborate on the importance of understanding the social, cultural, and political environment and provide additional considerations for instrument selection. Factors that may influence this decision can be classified into three groups: 1) technical adequacy (e.g., reliability, validity, freedom from bias); 2) practicality (e.g., cost, political consequences, duration, personnel needs); and 3) ethics (e.g., protection of human rights, privacy, legality) (Summerhill & Taylor, 1992).

Scope of the project influences sample size. The MAC made its mailing list available, which included the entire population (1,123 growers). We learned that the list was out-of-date (e.g., some growers listed had already left the business and others had an incorrect address), which would undermine any attempt to select a probabilistic sample. Thus, guided more by limitations than by choice, the entire population was included in our survey.

Such a sampling procedure has been referred to as a "convenience sample" in the literature (Patton, 1990). This non-probabilistic procedure, in which individuals self-select as respondents, requires caution when inferring survey outcomes to the entire population (Cochran, 1963). Since the MAC evaluation involved the direct participation of growers, ethical considerations were important. Primary concerns were voluntary participation, provision of sufficient information about the scope of the study, and ensuring confidentiality of the respondents (Rosey, 1992).

Available resources influence instrument selection. A mail survey offers the advantage of requiring minimal staff time to prepare and mail, and low overall cost (Diem, 1999). A resource-constrained environment also limits resources for the evaluation itself; thus factor 2 (practicality) drove our choice of instrument once technical adequacy and ethics were incorporated. In our case, information was ultimately gathered through a written mail survey divided into three parts. One section solicited demographic information related to the respondents and their operations, including trends in production and markets. A second part asked the respondents to evaluate the six programs supported by MAC. An additional section was designed to explore apple grower beliefs about resource allocation.

The survey was administered following Dillman's (2000) methodology, which is framed on social exchange theory (Homans, 1974). This process, also called the Tailored Design Method (TDM), views the interaction between respondent and evaluator as a social exchange, in which voluntary participation is a function of the benefit-cost ratio of taking part in the study. To encourage participation, it is necessary either to reduce costs (e.g., succinct surveys, prepaid envelopes) or to increase benefits (e.g., feedback, incentives). In our case, encouragement from the MAC Board was an additional way to increase benefits. Steps followed from Dillman's TDM in the implementation of our survey were: a personalized cover letter, a simple and straightforward survey, and a follow-up mailing.

To improve the response rate, two reminder letters were sent; one included a new copy of the survey along with another postage-paid reply envelope. These techniques have proven very effective in improving response rates in earlier applications (Brennan, 1992). Of the total number of surveys mailed, 282 were returned (a 25% response rate). Seventeen percent of the 282 respondents indicated that they were no longer growing apples, confirming an observed trend in this sector. This response rate provided enough data to analyze general trends and is considered adequate given the scope of the evaluation.
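
For readers tracing the figures above, a minimal arithmetic sketch in Python (the mailing count, return count, and percentages are those reported in the text; the variable names are ours):

    # Response-rate arithmetic for the MAC mail survey, using figures cited above.
    surveys_mailed = 1123        # entire grower population on the MAC mailing list
    surveys_returned = 282       # completed surveys returned

    response_rate = surveys_returned / surveys_mailed
    print(f"Response rate: {response_rate:.1%}")            # about 25%

    # Seventeen percent of respondents reported they no longer grow apples.
    no_longer_growing = round(0.17 * surveys_returned)      # roughly 48 respondents
    active_growers = surveys_returned - no_longer_growing   # roughly 234 active growers
    print(f"No longer growing apples: {no_longer_growing}")
    print(f"Responses from active growers: {active_growers}")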

Lessons Learned

  • When designing an instrument it is important to consider future evaluation needs.

  • Quality of available mailing lists must be accounted for in sample selection.

  • By definition, evaluation of a resource-constrained program implies constrained resources for evaluation.

  • In any evaluation the rate of response may be lower than expected. All opportunities to boost response, particularly those that have proven effective, should be followed.

Results Analysis and Delivery

Explanation

Following needs expressed by the MAC Board, a first step in the analysis of survey results was to define demographic variables that could be used to sort and compare responses. The objective was to put growers with common characteristics into the same group so that they could be contrasted with other groups (e.g., are there any differences in the responses of growers who have less than 30 acres of apples compared to those who have more than 100 acres?).

In the MAC survey, five variables were used to group growers: geographic region (defined by MAC), scale of production, target market (fresh or processed), grower age, and how members graded the MAC programs using an overall evaluation index. Members were asked to evaluate past performance in each of six MAC program areas on a scale from 1 (poor) to 5 (excellent). An overall evaluation index (OEI) was calculated as the simple average of assessment scores across areas.
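
As an illustration, a minimal sketch in Python (using the pandas library) of how such an index and grouping can be computed; the column names and data shown are hypothetical, not MAC records:

    # Sketch: overall evaluation index (OEI) as the simple average of the six
    # program-area scores (1 = poor to 5 = excellent), then compared across
    # scale-of-production groups. Data and column names are illustrative only.
    import pandas as pd

    programs = ["advertising", "merchandising", "export",
                "public_relations", "industry_services", "research"]

    responses = pd.DataFrame({
        "acres":             [25, 150, 60, 40, 210],
        "advertising":       [3, 4, 2, 5, 4],
        "merchandising":     [4, 4, 3, 4, 5],
        "export":            [2, 3, 3, 4, 3],
        "public_relations":  [3, 5, 2, 4, 4],
        "industry_services": [4, 4, 3, 5, 4],
        "research":          [5, 4, 3, 4, 5],
    })

    # OEI: unweighted average of the six assessment scores for each respondent.
    responses["oei"] = responses[programs].mean(axis=1)

    # Sort growers into scale-of-production groups and compare average OEI.
    responses["scale"] = pd.cut(responses["acres"],
                                bins=[0, 30, 100, float("inf")],
                                labels=["<30 acres", "30-100 acres", ">100 acres"])
    print(responses.groupby("scale", observed=True)["oei"].mean())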

Two types of questions were included to evaluate grower perceptions of budget allocation. First, members were asked how they would allocate, in percentage terms, future economic resources among five of the six programmatic areas. (The research area was excluded from this question, as it has historically been administered through a separate budgetary process.) A second question was similar, but it asked how members believed MAC had allocated past resources. An important, and unexpected, result was the extremely low capacity of members to identify or even estimate past allocations. This outcome clearly demonstrated a vital gap in information and understanding among growers and identified an important educational opportunity for board members to consider in the future.

Several significant differences were found that provided important information to industry leadership, as illustrated by the following example. Growers were asked how they would allocate economic resources for support of fresh versus processed markets.

On average, growers who target the fresh market believe that for each dollar allocated to processed markets, $1.70 should be directed to fresh markets. Not surprisingly, growers who target processed markets would allocate the resources between the two categories in a ratio of almost 1 to 1. Further information was elicited through a series of comparisons among member categories. Growers who target fresh markets farm 80% more acres on average than those who target processed markets. Cross-tabulations showed the West Central region is heavily oriented toward the processed market, in contrast to the fresh-market focus of Southwest Michigan.
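
A sketch of this kind of group comparison in Python/pandas; the records and column names below are hypothetical and serve only to show the mechanics of the ratio and cross-tabulation:

    # Sketch: fresh-vs-processed allocation ratios by target-market group and a
    # region-by-market cross-tabulation. Data and column names are illustrative.
    import pandas as pd

    survey = pd.DataFrame({
        "target_market":   ["fresh", "fresh", "processed", "processed", "fresh"],
        "region":          ["Southwest", "Southwest", "West Central",
                            "West Central", "West Central"],
        "alloc_fresh":     [65, 60, 50, 48, 64],   # % respondent would allocate to fresh
        "alloc_processed": [35, 40, 50, 52, 36],   # % respondent would allocate to processed
    })

    # Average dollars to fresh markets for each dollar to processed markets, by group.
    by_group = survey.groupby("target_market")[["alloc_fresh", "alloc_processed"]].mean()
    by_group["fresh_per_processed_dollar"] = (by_group["alloc_fresh"]
                                              / by_group["alloc_processed"])
    print(by_group)

    # Cross-tab of region against target market, the kind of comparison used to
    # contrast West Central (processed) with Southwest Michigan (fresh).
    print(pd.crosstab(survey["region"], survey["target_market"]))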

At first glance, numeric differences among groups may appear large in absolute terms; however, because these are average responses for certain variables (e.g., acreage), a closer statistical comparison must be made. Armstrong and Overton (1977) present a series of alternatives for estimating non-response bias in mail surveys that may be considered if the evaluation makes inferences to the whole population. The number of growers in each group and the heterogeneity of the responses within the group determine the confidence intervals. A mean comparison test (Tukey test, t-test, or paired-sample test) was used to test for statistical significance. Findings were reported as statistically significant if the result obtained would have occurred by chance no more than 5 times out of 100.
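
For example, a minimal sketch of one such mean comparison, a two-sample t-test at the 5% level using scipy; the scores are illustrative, not the MAC data:

    # Sketch: two-sample t-test comparing average OEI scores between two grower
    # groups at the 5% significance level. Values are illustrative only.
    from scipy import stats

    oei_fresh     = [3.8, 4.1, 3.5, 4.4, 3.9, 4.2]   # OEI, growers targeting fresh markets
    oei_processed = [3.1, 3.4, 2.9, 3.6, 3.2, 3.0]   # OEI, growers targeting processed markets

    t_stat, p_value = stats.ttest_ind(oei_fresh, oei_processed, equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

    # Report the difference as statistically significant only if it would occur
    # by chance no more than 5 times out of 100 (p < 0.05).
    if p_value < 0.05:
        print("Group means differ significantly at the 5% level.")
    else:
        print("No statistically significant difference at the 5% level.")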

After all responses were processed and analyzed, a comprehensive report was prepared that incorporated an extensive degree of detail and disaggregated information (e.g., by age, size of farm, target market, geographic region, etc.). This comprehensive report was distributed prior to a meeting with the MAC Board, where the main findings were discussed and the interpretation of the figures in the report was carefully explained.

A summary of survey highlights was widely distributed to the general membership and other interested parties. Targeted reports were compiled and distributed for particular groups such as the MAC research committee (Aguilar & Thornsbury, 2003a, 2003b). Numerous follow-up presentations were made to specific groups, such as the research committee or the board, as additional questions arose.

The MAC Board and membership have already begun to use the information collected in their decision-making. One example relates to the administration of research funding. In the spring 2004 grower newsletter, the Michigan Apple Committee Chair announced a slight shift in direction, moving from what had been a complete production-related research focus to one that includes value-added apple projects as well as consumer and market research. The newsletter cited the survey as evidence that the membership would support such a shift.

Lessons Learned

  • Results that may at first appear unlikely (e.g., the extremely low response rate on questions about past budget allocations) often point to significant needs.

  • A clear distinction between numerical differences and statistically significant differences must be made when results are presented. This is often not easy to explain and so must be carefully prepared in non-technical terms.

  • Results must often be presented numerous times and/or in small sections. The value of the evaluation is often in the details, which are not easy to absorb at one time.

  • Action on evaluation results is likely to occur slowly and in phases. Be patient.

Conclusions

Extension programs have always faced the challenge of delivering reliable information as a valued input to the decision-making process of clientele groups. This role continues to expand, a notable function being the need to help clientele groups conduct their own internal program assessments. This article draws on a process used successfully at Michigan State University to evaluate member perceptions of program effectiveness for a commodity organization where the program beneficiaries are also the funding source and to link a practical evaluation example with theory. Based on our experience, key factors that should be considered include the following.

  • The proactive participation and empowerment of group leaders are required to ensure that the research tools used gather the information needed to make decisions. It is important to gather political support before beginning the evaluation and to provide technical support during implementation.

  • All the participants (evaluators and respondents) in the evaluation must be knowledgeable about issues to be assessed and work from a consistent definition of programmatic areas. A common understanding of the issues at the beginning of the process is important to meet output expectations.

  • Because a quantitative evaluation generates results whose relevance will vary depending on their degree of statistical significance, the process and careful interpretation of results must be fully explained. The people who will continue to use these figures as an input to decision-making need a clear understanding of how to interpret results.

  • The assessment process not only shows trends and the efficacy and/or efficiency of programs, but also serves as a tool to demonstrate to program beneficiaries a concern for the quality of services provided.

  • An evaluation process can also yield important additional benefits, such as the identification of educational opportunities for commodity producers or organization board members.

References

Aguilar, C., & Thornsbury, S. (2003a). Michigan apple committee winter 2003 grower survey: Summary of results. Agricultural Economics Report 618, Michigan State University.

Aguilar, C., & Thornsbury, S. (2003b). Michigan apple committee winter 2003 grower survey: Research programs. Agricultural Economics Report 619, Michigan State University.

Alkin, M. C. (1990). Debates on evaluation. Thousand Oaks, CA: Sage.

Armstrong, J. S., & Overton, T. S. (1977). Estimating nonresponse bias in mail surveys. Journal of Marketing Research, 14, 396-402.

Brennan, M. (1992). Techniques for improving mail survey response rates. Marketing Bulletin, 3, 24-37, Article 4.

Cochran, W. G. (1963). Sampling techniques (2nd ed.). New York: Wiley and Sons.

Decker, D. J., & Yerka, B. L. (1990). Organizational philosophy for program evaluation. Journal of Extension [On-line], 28(2). Available at: http://www.joe.org/joe/1990summer/f1.html

Diem, K. (1999). Choosing appropriate research methods to evaluate educational programs. Rutgers Cooperative Extension Fact Sheet #FS943. New Brunswick, NJ.

Dillman, D. (2000). Mail and Internet surveys: The tailored design method (2nd ed.). New York: John Wiley & Sons.

Douglah, M. (1998). Developing a concept of Extension program evaluation. Program Development and Evaluation, Series G3658-7. University of Wisconsin-Extension.

Homans, G. C. (1974). Social behavior. New York: Harcourt, Brace, Jovanovich.

Israel, G. D. (1992). Sampling the evidence of Extension program impact. Fact Sheet PEOD-5, Institute of Food and Agricultural Sciences, University of Florida.

O'Neill, B. (1998). Money talks: Documenting the economic impact of extension personal finance programs. Journal of Extension [On-line], 36(5). Available at: http://www.joe.org/joe/1998october/a2.html

Patton, M. Q. (1990). Qualitative evaluation and research methods. Newbury Park, CA: Sage Publications.

Rosey, D. (1992). Program evaluation. Chicago: Nelson-Hall Publisher.

Scriven, M. (1967). The methodology of evaluation. In Stake R.E. (Ed.), Curriculum evaluation. American Educational Research Association Monograph Series on Evaluation, No. 1. Chicago: Rand McNally.

Scriven, M. (1991). Evaluation thesaurus (4th ed.). Newbury Park, CA: Sage Publications.

Stupp, R. (2003). Program evaluation: Use it to demonstrate value to potential clients. Journal of Extension [On-line], 41(4). Available at: http://www.joe.org/joe/2003august/comm1.shtml

Summerhill, W. R., & Taylor, C. L. (1992). Selecting a data collection technique. Circular PE-21, Program Evaluation and Organizational Development, Florida Cooperative Extension Service.

Taylor-Powell, E., Steele, S., & Douglah, M. (1996). Planning a program evaluation. Program Development and Evaluation, Series G3658-1. University of Wisconsin-Extension.