The Journal of Extension

February 2012 // Volume 50 // Number 1 // Feature // v50-1a3

The Impact of the Government Performance and Results Act (GPRA) on Two State Cooperative Extension Systems

The research reported here examined the impact of the Government Performance and Results Act on accountability and evaluation activities in two state Cooperative Extension Systems. Accountability was examined using five dimensions from Koppell's (2005) framework. Findings indicated both Extension systems transferred accountability activities to county-level educators through increased reporting expectations. There was not a strong connection between GPRA and changes in program evaluation practice or understanding in either state. Clear definitions of accountability within Extension and close examination of the role of evaluation may enhance accountability efforts and result in not only using evaluation for accountability but also for organizational learning.

Sarah Baughman
4-H Youth Development Extension Expert
Virginia Tech
Blacksburg, Virginia

Heather H. Boyd
Research Development Program Director
University of Notre Dame
Notre Dame, Indiana

Kathleen D. Kelsey
Professor and Evaluator
Department of Agricultural Education, Communications and Leadership
Oklahoma State University
Stillwater, Oklahoma


The Government Performance and Results Act (GPRA) was passed in 1993 to improve the effectiveness of federal programs (Office of Management and Budget, 1993). GPRA is one of several accountability efforts that set mandates for Extension (Bennett, 1996). The Agricultural Research, Extension and Education Reform Act of 1998 (AREERA) required approved plans of work (POW) in order for Extension and agricultural research to receive funds from the federal government. The Office of Management and Budget's 2004 Program Assessment Rating Tool (PART) formally reviews agricultural programs. Federal-level accountability efforts have the potential to inform and influence Extension, its programs and its evaluation activities; however, little research has been conducted to measure the impacts of these mandates on accountability.

Conceptualizing accountability and its dimensions is complex. The term "transparency" often appears in conjunction with the term "accountability." Fox (2007) examined the relationship between accountability and transparency and proposed that transparency can be either clear or opaque, while accountability can be either soft or hard (Table 1). For example, stakeholder feedback is considered soft accountability, while cost-benefit analysis is considered hard accountability.

Table 1.
Relationship Between Transparency and Accountability

Transparency: clear vs. opaque
Accountability: soft vs. hard

Koppell (2005) elaborated on the relationship between transparency and accountability. He defined transparency as a foundational dimension of accountability and as the "idea that an accountable bureaucrat and organization must explain or account for its actions" (p. 96). The key determinant of transparency is answered by the question, "Did the organization reveal the facts of its performance?" Cooperative Extension Service reporting systems that provide information relative to program performance could help address the issue of transparency. A second foundational dimension of accountability is liability, defined as an organization or individual being "held liable for their actions, punished for malfeasance, and rewarded for success" (p. 96). Being liable as stewards of public monies takes the form of budget and program cuts. The liability dimension answers the question, "Did the organization face consequences for its performance?" (p. 96).

The remaining three dimensions of accountability in Koppell's typology are controllability, responsibility, and responsiveness. The construct of controllability asks, "Did the organization do what its principal commanded?" (p. 97). Responsibility refers to the laws, rules, and norms governing an organization and asks, "Did the organization follow the rules?" (p. 98). Accountability is externally focused in the dimension of responsiveness, which concerns meeting the needs of constituents or stakeholders and answers the question, "Did the organization fulfill the substantive expectation (demand/need)?" (p. 98).

The purpose of the research reported here was to examine the impact of GPRA on two state Cooperative Extension Systems' reporting and evaluation procedures using Koppell's (2005) five dimensions of accountability. Specific research questions were: 1) Have Cooperative Extension evaluation expectations changed since GPRA was implemented? 2) How does Cooperative Extension define accountability in program evaluation? 3) What policies has Cooperative Extension put into place to increase the accountability of public funds since GPRA? 4) What are the impacts of the policies put in place?


The researchers used a purposive sample from two southern land-grant university Cooperative Extension organizations. The two universities had similar bureaucratic structures in Extension. Key personnel in both systems were identified using snowball sampling techniques (Patton, 1990). In order to be considered for the research project, participants needed to be employed for several years in Cooperative Extension and have experiences related to program evaluation and federal reporting systems.

Participants were solicited by email, followed by a telephone call. Five participants were interviewed in person, and five were interviewed by telephone using a semi-structured interview protocol (N=10). Of the 10 participants, six were from university A, and four were from university B. All interviews were conducted by the same researcher to ensure consistency.

The interview protocol was checked for face, content, and construct validity by a panel of experts, including three evaluation specialists from separate Extension organizations. Validity was also enhanced by conducting a pilot interview with a qualified expert. The pilot interview was not included in the final data analysis. Interviews ranged from 20 to 60 minutes in length and were recorded and transcribed verbatim.

The co-authors coded transcripts line-by-line for themes related to the research questions, coding a number of overlapping cases to enhance internal validity. Codes were organized into themes by the co-authors using Atlas.ti® and analyzed in response to the guiding questions (Seidman, 2006). Credibility was addressed through member checking, prolonged engagement, and peer debriefing (Guba & Lincoln, 1989). The researchers also used peer review/debriefing, member checking, and bracketing as a means of clarifying researcher bias and prolonged engagement (Creswell & Miller, 2000). An audit trail that included field notes was maintained by the primary researcher to improve dependability.


The Participants

Participants averaged 21 years of experience in Extension, with a range of four years to 31 years. All participants' job responsibilities related to program evaluation, primarily in mid- to high-level administrative roles.

How Have Evaluation Expectations Changed as a Result of GPRA?

The first research question examined how evaluation expectations have changed since the passage of GPRA. The National Institute of Food and Agriculture (NIFA) sets expectations and provides policy direction; however, each Extension service implements reporting according to its own interpretation of policy. Findings indicated both Extension services held expectations for evaluation and reporting at the county level, where county educators were primarily responsible for conducting evaluations and reporting results to state-level administration.

Expectations for evaluation were not explicitly articulated beyond reporting requirements for county educators. However, county educators and specialists at both universities were expected to report impacts and outcomes to administration at the state level. Information from the county was compiled and aggregated into a report that described the results and impacts of programming and was sent to NIFA.

State-level accountability was manifested as personal accountability for county educators at both universities. The aggregation of county-level data resulted in a state-level report. Both states placed evaluation responsibilities on the county educators.

The purpose of county-educator driven evaluation was to provide information to the state-level system that was compiled into a report to satisfy GPRA reporting requirements. The emphasis was placed on fulfilling state-level reporting requirements.

Findings did not support a strong connection between GPRA and changes in program evaluation practice or understanding in either system. Several other mediating factors were mentioned as being associated with changes in program evaluation expectations, including increased pressure from state-level government funders, evaluation requirements from external granting agencies, and a shift in employee classification from staff positions to faculty positions for county educators in one state. Participants from both states noted that new faculty and specialists were hired who had an understanding of evaluation importance and methods.

How Does Cooperative Extension Define Accountability?

The second research question addressed how Cooperative Extension defined accountability. Neither system used an explicit definition of accountability; however, participants from both states implied through their practices that evaluation activity was equivalent to accountability and that conducting evaluation activity and reporting results substantially satisfied the requirements for accountability.

One administrator demonstrated the system view of accountability as, "we have to show enough so that people can believe we made a difference. That is how I define accountability." Another administrator said, "We are always accountable to stakeholders and feds and everybody else, so we have to report our progress and activities to those folks as well." The emphasis was on showing accountability, or results and impacts, to stakeholders. Evaluation was seen as summative, for an external audience, a duty to be performed to satisfy an external reporting requirement, and not necessarily for organizational learning (Preskill & Torres, 1998). Organizational learning was not mentioned in the interview data as an expectation of evaluation.

What New Policies Resulted from GPRA?

The third research question asked if new policies had been put in place as a result of GPRA. Practices and expectations had changed since the passage of GPRA, but only one new formal policy was identified by the participants. In order to improve stewardship of university funding, university A implemented a new fiscal policy for county-based offices. University A also required county educators and specialists to report impacts in their annual faculty reports. University B asked county educators and specialists to join state-level planning teams whose products included evaluation components and logic models. Tacit expectations for field and faculty level evaluation had changed, but new policies had not been put in place to increase accountability beyond usage and expenditure of public funds at university A. Both universities used different approaches to address the same mandate.

Implementing stricter financial controls and focusing more on performance issues as reported in faculty documentation demonstrate the issue of "accountability bias." An "accountability bias" occurs when those who hold others accountable concentrate more on finances and fairness than on performance, because focusing on finances can take less time and effort to uncover and document mistakes than can revealing the facts of performance (Behn, 2001).

Other efforts had been made in both universities to increase visibility of program evaluation. University A hired an evaluation specialist with an emphasis on building evaluation capacity and university B implemented a system of internal funding that required submission of logic models as a means of increasing internal accountability.

What Was the Overall Impact of GPRA?

Research question four addressed the impact of policies that had been put into place. Expectations for evaluation work had changed in the time frame examined. County educators were asked to increase the amount and level of their evaluation efforts. Interviewees did not discuss how additional evaluation and reporting responsibilities were offset with a reduction in other responsibilities. According to the participants, neither university addressed agency-level accountability or evaluation beyond reporting the impacts of educational programming efforts via NIFA's plan of work requirement.


There is evidence from the interviews regarding four of the five dimensions of accountability (transparency, controllability, responsibility, and responsiveness). Evidence for the fifth dimension (liability) could not be confirmed. Both universities emphasized the importance of stakeholder involvement in the program planning and evaluation process and had systems and processes in place to involve stakeholders.

This can be interpreted as being responsive to the needs of local and state communities. In addition, participants completed the federal reporting process and secured federal funds to maintain programs. The primary expectation from the federal partner for accountability was manifested as completing annual reports based on defined goals. Meeting reporting requirements represents the controllability dimension of accountability. From the information provided by the participants, both universities met the reporting requirements mandated by NIFA for accountability. Each state determined its own approach to implementing reporting.

One participant discussed the only explicit policy change revealed in the pool of interviews, which related to improved fiscal stewardship. Viewed through the responsibility dimension (whether the organization followed the rules), this change might indicate that the organization had not previously been following the rules but now was. As Koppell (2005) described, fidelity to law is a "most straightforward" measure of responsibility.

Soft accountability, in the form of complying with reporting requirements and carrying out the basic activities needed to describe the organization's work, often passes for accountability; however, these activities do little to inform work or change routines. The interviews offered evidence of transparency. Each person who was asked for an interview responded and complied, indicating a willingness to speak with the researchers regarding accountability, evaluation processes, and products.

Participants were able to articulate general processes of reporting; however, changes in accountability practice in association with GPRA were primarily the responsibility of county educators, not administrators or evaluation specialists. County educators were required to submit evaluation reports to supervisors, who completed university- and state-level reports that were summarized in a report sent to the federal funder.

The interviews yielded little evidence of the liability dimension of accountability. However, judging whether the organizations faced consequences for performance in this context could be very difficult. Organizational performance expectations for reporting were not mentioned as problematic or unmet by the participants. In addition, a link between a poor review of a federal report and decreased funding of an Extension organization would have to be explicitly stated by decision makers to be established. There was no evidence that these individual Extension services were punished or rewarded for organizational performance by NIFA. If accountability is "an inherently participatory concept" and a "discursive condition" (Dowdle, 2006), there is little data here that speaks to impressions regarding the ongoing conversation between organizations and their primary federal funding agency.

Implications for Evaluation Practice

Tracking organizational changes in association with performance management and accountability initiatives is a challenging endeavor. Organizations struggle with stakeholder involvement, program evaluation, and performance management: how should public organizations be judged (Koppell, 2005)? The research reported here examined five dimensions of accountability relative to demonstrating accountability to a federal funder through a formal reporting process. Because educational programs at the two universities were predicated on local needs and local input, future research could also examine accountability to local stakeholders.

Organizations would benefit from an intentional definition of accountability and of how their products and services can be considered accountable. Organization-wide conversations about what accountability means at each level of an organization (local, state, and federal) would help clarify roles and expectations. Indicators of accountability within and between systems may help clarify relationships and enable organizations to perform to specific expectations.

In the research reported here, two Extension systems had addressed issues of stakeholder accountability (soft) and, to a small extent, cost-benefit accountability (hard). Over time, results reporting had evolved from communicating outputs, such as the number of participants or workshops held, to reporting outcomes of broad education programs. In some ways, this movement "shifts the standard of accountability to performance" (Koppell, 2005, p. 99). The shift to performance, or outcomes and impacts, marks much of the evaluation and reporting activity of Extension organizations nationwide over the past several years (Rennekamp & Engle, 2008).

While the expectation of federal reporting has been used by evaluators as a lever to investigate other evaluation issues (Taylor-Powell & Boyd, 2008), evaluators could benefit from reminding themselves that the purpose of the federal accountability system is not to inform organizational learning but to demonstrate the results of programs. Koppell (2005) stated that organizations trying to meet conflicting accountability expectations are likely to be dysfunctional. Trying to satisfy many different information needs under the umbrella of program evaluation may not be effective for any of the intended purposes of these efforts. Patton (2008, p. 111) reminded the evaluation community that accountability, program improvement, and learning are distinct purposes that require "different data and creating contrasting challenges" among those involved in evaluation.

A more expansive view of both accountability and evaluation might aid complex organizations nationwide. A holistic evaluation system as encouraged by Rennekamp and Engle (2008) and Radhakrishna and Martin (1999) would encourage capacity building in each Extension organization. Holistic evaluation would include formal and non-formal evaluations at all levels and incorporate meta-evaluation of the organization. Accountability would then include improvement of programs and practice, in addition to impact writing.


Behn, R. (2001). Rethinking democratic accountability. Washington, D.C.: The Brookings Institution Press.

Benjamin, L. (2008). Evaluator's role in accountability relationships: measurement technician, capacity builder or risk manager? Evaluation, 14(3), 323-343.

Bennett, C. (1996). New national program information system for Cooperative Extension: Lessons from experience. Journal of Extension [On-line], 34(1) Article 1FEA1. Available at:

Creswell, J., & Miller, D. (2000). Determining validity in qualitative inquiry. Theory into Practice, 39(3), 124-130.

Dowdle, M. (2006). Public accountability: Designs, dilemmas and experiences. New York: Cambridge University Press.

Fox, J. (2007). The uncertain relationship between transparency and accountability. Development in Practice, 17(4-5), 663-671.

Guba, E. G., & Lincoln, Y. S. (1989). Fourth generation evaluation. Newbury Park, CA: Sage.

Koppell, J. G. (2005). Pathologies of accountability: ICANN and the challenge of "multiple accountabilities disorder." Public Administration Review, 65(1), 94-108.

Koppell, J. G. (2006). Reform in lieu of change: Tastes great, less filling. Public Administration Review, January/February: 20-23.

Office of Management and Budget. (1993). Government Performance and Results Act of 1993. Retrieved from:

Patton, M. Q. (1990). Qualitative evaluation and research methods (2nd ed.). Newbury Park, CA: Sage.

Patton, M. Q. (2008). Sup wit eval ext? In M. Braverman, M. Engle, M. E. Arnold, & R. A. Rennekamp (Eds.), Program evaluation in a complex organizational system: Lessons from Cooperative Extension. New Directions for Evaluation, 120, 101-116.

Preskill, H., & Torres, R. T. (1998). Evaluative inquiry for learning in organizations. Thousand Oaks, CA: Sage Publications.

Radhakrishna, R., & Martin, M. (1999). Program evaluation and accountability training needs of Extension agents. Journal of Extension [On-line], 37(3) Article 3RIB1. Available at:

Rennekamp, R., & Engle, M. (2008). Evaluation in Cooperative Extension: A case study in organizational change. In M. Braverman, M. Engle, M. E. Arnold, & R. A. Rennekamp (Eds.), Program evaluation in a complex organizational system: Lessons from Cooperative Extension. New Directions for Evaluation, 120, 15-26.

Seidman, I. (2006). Interviewing as qualitative research: A guide for researchers in education and the social sciences (3rd ed). New York: Teachers College Press.

Taylor-Powell, E., & Boyd, H. H. (2008). Evaluation capacity building in complex organizations. In M. Braverman, M. Engle, M. E. Arnold, & R. A. Rennekamp (Eds.), Program evaluation in a complex organizational system: Lessons from Cooperative Extension. New Directions for Evaluation, 120, 55-70.