February 2003 // Volume 41 // Number 1 // Tools of the Trade // 1TOT1


Public Issues Education Projects: Meeting the Evaluation Challenges

Abstract
Evaluating and monitoring routine Extension projects is hard enough. Programs that feature concepts like public issues education (PIE), public dialogue, or civic engagement are even more challenging. Familiar "rules and regs" of good evaluation still apply, but hints, warnings, and new resources can help. After 10 years of Extension PIE initiatives, there are also examples of evaluations of PIE that can guide planning.


Nancy Grudens-Schuck
Assistant Professor
Department of Agricultural Education and Studies
Iowa State University
Internet Address: ngs@iastate.edu


Introduction

Extension Public Issues Education (PIE) projects tackle the grittiest, most contentious issues faced by communities and citizens (Patton & Blaine, 2001). Projects may address:

  • Economic and environmental impacts of large-scale livestock operations,
  • Food security,
  • Eroding government support for families and children,
  • School violence, or
  • Cultural and ethnic conflicts.

Solving difficult problems on a practical level is a goal of PIE projects. Results are also expected to affect public policy at the local, state, or national level or to increase a community's capacity to meet the needs of its citizens in complex social and policy environments. Extension-related PIE projects can be national as well as local, such as the National Issues Forums (Arnone, 1999) or the W. K. Kellogg Foundation/Farm Foundation partnership in the 1980s (Hahn, Greene, & Waterman, 1994).

Challenges

PIE projects are admittedly difficult to evaluate. The challenges stem from:

  • The long-term, broad nature of the goals of such projects.
  • Conflicts associated with differences among stakeholders.

Compounding both of these challenges is the complex, systemic nature of the changes that are needed. However, evaluation is arguably even more important for projects involving public issues than for routine programming. The high profile of public issues, the wide range of emotions associated with them, and the tendency for PIE programs to have diverse partners require that Extension be savvy about administering such programs. One of the ways to be savvy as a program planner is to prepare honest, useful evaluation reports based on high-quality data.

Solutions

There is a good argument for assigning responsibility for evaluating PIE projects to independent professional evaluators or to campus-based faculty or staff, who may be more likely than local Extension educators to possess the necessary skills and resources. However, such specialists are often unavailable. There are also advantages to controlling the evaluation at the project level (Earl, Carden, & Smutylo, 2001; Rockwell, Jha, Williams, & Thayer, 2000).

Decide What Your Program Is About

Ideas for program activities often precede goal setting. The first step in designing a responsible evaluation for a PIE program, therefore, may be to decide what the program is really about. The goals of PIE programming can seem all-encompassing. It may, in fact, be true that the issue itself is enormous. However, the programming is probably more bounded.

For example, is a program designed to bring farmers and non-farming rural residents together to discuss farmland protection about:

  1. Helping non-farmers to understand farmers' issues?
  2. Obtaining a diversity of ideas for protecting farmland?
  3. Reducing conflict by helping farmers to become less fearful of rural residents? Or
  4. Developing more productive ways of discussing difficult issues among farmers and non-farmers?

If the answer is: "all of the above," then there is a failure to prioritize, and evaluation will be difficult.

Select the Best Methods for Your Purpose

There is no single best approach to collecting data. It is common--but not mandatory--for projects to collect "before and after" data on key indicators, called "pre-post" (Rockwell et al., 2000). Data could be collected, for example, about the demonstrated ability (behaviors) of members of groups to work well together before and after dialogue sessions (Taylor-Powell, Rossing, & Geran, 1998). On the other hand, economic data or information about community assets may be important (Flora et al., 1999).

Case study approaches have also been applied successfully to evaluation of PIE projects, even on a large scale (Hahn, Greene, & Waterman, 1994). Case study approaches combine methods, such as interviews, document review, and observation. Comprehensive approaches like outcome mapping use evaluation processes that involve a range of stakeholders in planning the evaluation (Earl, Carden, & Smutylo, 2001). It is also worthwhile to collect other projects' evaluation reports. You can show parts of the reports to clients, funders, or stakeholders. Elicit their reactions. Ask: "Would this type of data communicate the value of our project?" Concrete examples often work better than asking for opinions about abstract terms like "case study."

Determine Outcomes, Specify Indicators

Determine a small number of specific outcomes. Develop indicators that match.

In the earlier example, if No. 1 (helping non-farmers to understand farmers' issues) was the goal, then an outcome-indicator match might be greater knowledge of key farming practices among rural residents, measured by a survey sent 2 months after the session. The data could be compared with responses to the same survey completed prior to the first session. This method would be "pre-post."
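
For planners who want to see what this pre-post comparison looks like in practice, a minimal sketch in Python follows. Every number in it is hypothetical: each pair represents one rural resident's score (correct answers out of 10) on the farming-practices survey before the first session and again 2 months afterward. A spreadsheet would serve equally well; the point is only that the same indicator is measured the same way at both points in time.

    # Minimal "pre-post" sketch using hypothetical survey scores.
    # Each pair is one respondent's correct answers (out of 10) before the
    # first session and again 2 months after the session.
    from statistics import mean, stdev

    paired_scores = [
        (4, 7), (5, 6), (3, 5), (6, 9), (5, 8),
        (2, 4), (7, 7), (4, 6), (5, 9), (3, 6),
    ]

    changes = [post - pre for pre, post in paired_scores]

    print(f"Respondents completing both surveys: {len(paired_scores)}")
    print(f"Mean pre score:  {mean(pre for pre, _ in paired_scores):.1f} of 10")
    print(f"Mean post score: {mean(post for _, post in paired_scores):.1f} of 10")
    print(f"Mean change:     {mean(changes):+.1f} (SD {stdev(changes):.1f})")
    print(f"Respondents who improved: {sum(c > 0 for c in changes)} of {len(changes)}")

Note that only respondents who completed both surveys are paired, which is one of the practical cautions of the pre-post design.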

If No. 2 (obtaining a diversity of ideas for protecting farmland) was the goal, then the outcome-indicator match might be the number of distinct ideas recorded at dialogue sessions compared with ideas already present in public records or the media, drawing on observational data and document review.
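
A comparable sketch for this second goal, again with invented idea lists, counts the distinct ideas recorded at the dialogue sessions and flags those that do not already appear in public records or media coverage. In a real evaluation, the lists would come from facilitators' notes and the document review, and ideas would need to be coded consistently before they could be compared.

    # Hypothetical comparison of ideas recorded at dialogue sessions with
    # ideas already present in public records or local media coverage.
    session_ideas = {
        "purchase of development rights",
        "agricultural zoning overlay",
        "farm-link program for beginning farmers",
        "transfer of development rights",
        "conservation easements",
    }

    public_record_ideas = {
        "agricultural zoning overlay",
        "conservation easements",
        "use-value property assessment",
    }

    new_ideas = session_ideas - public_record_ideas  # ideas not yet on the public agenda

    print(f"Distinct ideas recorded at sessions: {len(session_ideas)}")
    print(f"Ideas already in the public record:  {len(public_record_ideas)}")
    print(f"Ideas new to the public discussion:  {len(new_ideas)}")
    for idea in sorted(new_ideas):
        print(f"  - {idea}")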

If No. 3 (reducing conflict by helping farmers to become less fearful of rural residents) was the goal, farmers who attended sessions could be interviewed about the ways in which fears related to actions of rural residents may have changed as a result of their participation.

Tie It All Together

The final step requires telling how the "small stuff" leads to the "big stuff." This is called the program logic or program theory (see web links associated with Taylor-Powell et al., 1998). Devote a separate paragraph to the program theory--make it stand out. The explanation should focus on cause and effect.

Consider goal No. 4 (developing more productive ways of discussing difficult issues among farmers and non-farmers) from the earlier program example. Below is a way to phrase the explanation of how a small program could lead to really big changes. The evaluation data will tell what actually occurred, but there should be no doubt that the program--in theory--had the potential to deliver big.

"Both acrimonious debate and avoidance make development of good public policy more difficult. A productive civil society thrives when dialogue is deliberative and ongoing, leading to sustainable public policies. The [example program] will provide opportunities for non-farming residents and farmers to develop skills that are a first step toward community-wide democratic dialogue, thereby making a modest but direct contribution to solving the current problem of unproductive decision making. The program is anticipated to be successful because it will model conditions for dialogue that are strongly associated with democratic civil society."

Conclusion

Programs with important, long-range social goals, such as Public Issues Education, can be evaluated if methods are tailored to the special challenges of PIE. As with other types of evaluation, it is important to develop clear outcomes and good indicators. It is perhaps more critical that evaluations of PIE projects explain clearly how program activities will advance complex, abstract goals, such as civil society or democratic deliberation.

There are new resources available to assist with evaluation of these types of programs; see the References section below.

References

Arnone, E.J. (Ed.). (1999). What citizens can do: A public way to act. Dayton, OH: Kettering Foundation. See more at http://www.kettering.org/

Earl, S., Carden, F., & Smutylo, T. (2001). Outcome mapping: Building learning and reflection into development programs. Ottawa, Canada: International Development Research Centre. See more at http://www.idrc.ca/evaluation/

Flora, C.B., Kinsley, M., Luther, V., Wall, M., Odell, S., Ratner, S., & Topolsky, J. (1999). Measuring community success and sustainability (RRD 180). Ames, IA: North Central Regional Center for Rural Development. Available at: http://www.ag.iastate.edu/centers/rdev/pubs/contents/180.htm

Hahn, A. J., Greene, J. C., & Waterman, C. (1994). Educating about public issues: Lessons from eleven innovative public policy education projects. Ithaca, NY: Cornell Cooperative Extension.

Patton, D.B., & Blaine, T. W. (2001). Public Issues Education: Exploring Extension's role. Journal of Extension [On-line], 39(4). Available at: http://www.joe.org/joe/2001august/a2.html

Putnam, R. D. (1995). Bowling alone: America's declining social capital. Journal of Democracy, 6(1). See more at http://www.bowlingalone.com/

Rockwell, S.K., Jha, L.R., Williams, S.N., & Thayer, C.E. (2000, November). Using success markers for programming in extension education. Paper presented at the 2000 Annual Meeting of the American Evaluation Association, Honolulu, Hawaii. Available at: http://danr.ucop.edu/eee-aea/using_success_markers.htm

Taylor-Powell, E., Rossing, B., & Geran, J. (1998). Evaluating collaboratives: Reaching the potential. Madison, WI: University of Wisconsin-Madison Cooperative Extension. Available at: http://www1.uwex.edu/ces/pubs/pdf/G3658_8.PDF. See more at http://www.uwex.edu/ces/pdande/evaluation/evaldocs.html