Spring 1991 // Volume 29 // Number 1 // Feature Articles // 1FEA1
Quality Means Results
Abstract
The purpose of the Minnesota program quality project was to clarify to ourselves and others what we mean by program quality. The program quality indicators clarify our mission by focusing priority setting, resource commitment, and program implementation actions on outcomes. In other words, quality means results.
What is program quality? What are the critical attributes of quality Extension programs, the things Extension programs must exhibit to make a visible difference on important problems? During the summer of 1989, the Minnesota Extension Service (MES) addressed those questions.
Quality Issue
Quality is a major concern of education at all levels. In Minnesota, proposed solutions to the quality problem in higher education are numerous and often controversial.1 What's not always clear is how those policy proposals relate to what we do in Extension.2 But what's clear is that decisions made in the name of quality improvement will shape the expectations that our university partners and external collaborators have of Extension. How we reckon with the issue of program quality will affect the resources that Extension may draw on to support our work on important issues.
To focus our efforts and achieve results, those of us in Extension must be clear about what we mean by program quality. Further, we need to know and agree on what others can expect when we talk about quality Extension programs.
Program Quality Project
The MES program quality project involved three phases. During the first phase, a nine-member work group identified critical attributes associated with high impact programs and developed a preliminary quality statement. The work group used a variation of the critical success factor3 method to identify indicators of program quality. The primary goal was to identify indicators that would serve as guides for priority setting and program action by all Extension actors at all levels of programming.
During the second phase, discussions were held with Extension faculty (agents, department heads, specialists, administrators), county Extension Committee members, and advisory groups. The indicators were then revised based on feedback from those discussions. The third phase was the application of the quality indicators in ongoing priority setting, program decision making, and assessment.
Minnesota Program Quality Statement
The 14 quality indicators and related criteria are described in a brief document, Program Quality Indicators.4 The document lists basic assumptions about the purpose and nature of Extension programs and notes key concepts. Program is defined as a range of purposeful actions intended to make a difference in a problem. Extension programs are time-limited and evolve in overlapping phases. Inherent to each phase is a central question driving decision making and action.
The basic assumption underlying the quality framework is that quality has its roots in how effectively the central questions of each program phase are addressed and linked to desired outcomes (see Table 1). Quality indicators, then, are the principles guiding decision making and action processes; they're practical screens Extension actors can use for setting priorities, channeling resources, and assessing effectiveness as programs unfold.
Table 1. Program quality framework.
Program phase | Central questions | Quality indicators
---|---|---
Problem selection | What important societal problems exist that justify MES attention? | Important; Focused; Grounded
Commitment | Can MES make a difference in the problem? | Timely/time-limited; Credible; Capacity
Strategy implementation | What needs to be done to make a visible difference in the problem? | Results-oriented; Responsive; Feasible; Flexible/adaptive; Systematic
Review/sunset | What did we learn? What should we do next? | Utility; Evidence; Follow-through
Problem Selection
Problem selection includes three challenges: (1) selecting an important issue, (2) focusing the issue by clarifying specific problems associated with the issue, and (3) grounding those problems by understanding to what extent and how those problems are exhibited in communities, individuals, and regions. The challenge during problem selection is to move from a generalized issue to a focused problem that can be acted on. If we develop and implement programs based only on a general understanding of a broad issue, we run the risk of underestimating the complexity of the issue, and the results we can achieve will be limited. Table 2 notes the quality indicators and criteria for problem selection.
Table 2. MES program quality indicators.
Program phase | Central questions | Quality indicators | Criteria
---|---|---|---
Problem selection | What important societal problems exist that justify MES attention? | Important; Focused; Grounded | * The problem is important. * Extension has an important role to play. * Problem impact indicators are specified. * Problem is locally interpreted. * Other efforts addressing the problem are identified and involved. * Problem is recognized and marketable.
Commitment | Can MES make a difference in the problem? | Timely/time-limited; Credible; Capacity | * Timing is right: the program is neither too early nor too late. * Program sunset is defined: time limits are specified. * Research base exists and is identified and incorporated. * Ethical implications of addressing the problem are considered. * Political implications (risks and benefits) are considered. * Individual and team expertise exist or can be acquired. * Key stakeholders/partners/collaborators are informed and involved. * Financial resources are negotiated. * Accountability means are established. * Means are established to monitor the problem.
Strategy implementation | What needs to be done to make a visible difference in the problem? | Results-oriented; Responsive; Feasible; Flexible/adaptive; Systematic | * Program components are logically linked and have the potential for making a difference in the problem within a specified period of time. * Implementation (content, frequency of exposure, and delivery) is sufficient to make a difference in the problem. * Delivery is appropriate for targeted clients. * Responsive action: short lead times. * Program components and implementation fit current or anticipated capacity; it's doable. * Delivery is managed and systematically adapted as conditions change; we know what's going on. Program and problem monitoring information is used to make changes in strategy implementation.
Review/sunset | What did we learn? What should we do next? | Utility; Evidence; Follow-through | * Appropriate stakeholders are involved in sunset assessment and decision-making processes. * Evidence plan focuses on important questions: questions that will illuminate program results and impact, and support relevant decision making. * Assessment evidence is available at the designated program time limit: data collection and analysis are planned for and precede review/sunset. * Evidence is balanced; it reflects multiple levels of program operation from multiple sources, and changes in problem indicators. * Assessment findings and next-step decisions are communicated to clients and stakeholders. * Next steps are accomplished.
Commitment
An Extension program represents a commitment to make a difference in a problem. In practical terms, that means a commitment of time, credibility, and capacity.
Programs that make a difference in important problems are both timely and time-limited. High-impact programs anticipate a problem or are responsive to an existing problem in a timely manner. In addition, high-impact programs have an end point. Clear, up-front time limits provide practical parameters for resource commitment and strategy implementation.
High-impact programs are credible. Quality Extension programs are based on credible information about important problems and incorporate alternative perspectives when research-based information conflicts. In addition, credible programs address the ethical5 questions and political6 implications associated with the issue.
High-impact programs have sufficient resources. Effective programs require a mix of resources (expertise, time, and dollars) and together those resources represent our capacity to do the work that needs doing. Up-front resource commitment supports programs that become more than good ideas and have the potential to make a difference in problems. Table 2 lists commitment indicators and criteria.
Strategy Implementation
The purpose of program strategy is to make a difference in a problem. Program strategy includes specific approaches and delivery. In total, strategy provides direction for action.
Program strategy may include some or all of the following approaches: technology transfer, information dissemination, problem solving, or long-term development and change efforts.7 Program strategy may be pre-planned, added, and/or adapted during implementation to fit the problem arena, targeted clientele, or changes in the nature of the problem. Program strategy includes, but goes beyond, content and method decisions, design of educational products, facilitation, and program management.
High-impact programs take strategy decisions and implementation seriously. The primary concern surrounding all strategy decisions is how well program strategy fits the problem and whether strategy actions take us closer to results. Table 2 lists the quality indicators and criteria for strategy implementation.
Review/Sunset
High-impact programs don't fade away unnoticed. We thoughtfully assess programs, make decision makers aware of what we've accomplished, and take appropriate follow-through action.
Utility underscores and links the practical and purposeful nature of sunset tasks: assessment, decision making, and action. Useful assessment provides empirical grounding for judging results and deciding what to do next. Consequently, program actors link assessment questions to important decisions. Assessment questions form the basis of evidence plans.
Evidence specifies data quality. Sunset assessment requires balanced evidence about multiple program levels from multiple sources, and changes in problem indicators.
Follow-through recognizes the active nature of sunset. It's important to communicate what we learned, fulfill our public accountability responsibility, and initiate next steps. Next steps may include program modification (and recommitment to the problem), transferring the program to other groups or agencies, or terminating the program. Table 2 lists the quality indicators and criteria for review/sunset.
Feedback and Current Use
The program quality indicators were shared with Extension faculty and partners. Extension partners indicated that the quality indicators clarified what they should expect from MES and what questions they should ask about Extension programs to enhance program effectiveness.
This gave me a more direct view of how I, as a committee member, can use my view to evaluate the impact of MES in Minnesota. It's good to ask hard questions. The program quality indicators showed that our plan of work programs are too broad, not well-defined. (county Extension Committee member)
Made me aware that I need to ask more questions. Now I know what to ask. (county Extension Committee member)
I'm glad to see that Extension is addressing the quality issue. This helps me understand Extension. (department head)
The quality indicators are now used by many county Extension Committees as screens for priority setting and planning programs. In addition, the quality indicators provide a legitimate arena for Extension Committees and county faculty to collaboratively address hard questions-for example, how many programs can be on the agenda if we're serious about making a difference on important problems? One group of counties used the indicators to focus the Extension program agenda and align resources with priorities.
The program quality indicators are used at the state level as well. They were adopted by the program development task force as the basis for reformulating MES program planning processes, and they also guided our work on reformulating both the content and process of county program reporting. Our new county reports serve as briefings for local decision makers, providing the information those decision makers need to make decisions and improve program effectiveness.8
Summary
The purpose of the Minnesota program quality project was to clarify to ourselves and others what we mean by program quality. The program quality indicators clarify our mission by focusing priority setting, resource commitment, and program implementation actions on outcomes. In other words, quality means results.
Footnotes
1. Maintaining Minnesota's Educational Advantage: An Analysis of Future Higher Education Needs and Alternative Strategies to Address Them in Minnesota (St. Paul: Minnesota Higher Education Coordinating Board, The MSPAN Project, 1989).
2. Irwin Feller, Universities and State Governments: A Study in Policy Analysis (New York: Praeger, 1986) and Frank Newman, Choosing Quality: Reducing Conflict Between the State and the University (Denver, Colorado: Education Commission of the States, 1987).
3. John F. Rockart, "Chief Executives Define Their Own Data Needs," Harvard Business Review, LVII (March 1979), 85-93; Les Garner, "Critical Success Factors in Social Services Management," New England Journal of Human Services, VI (No. 1, 1986), 27-31; and Marsha R. Mueller, "Critical Success Factors" (Paper presented at the Annual Meeting of the American Evaluation Association, Boston, Massachusetts, 1987).
4. Marsha R. Mueller and others, Program Quality Indicators (St. Paul: University of Minnesota, Minnesota Extension Service, 1990).
5. Issues involve conflict. Making a difference on important problems requires not only technical knowledge but understanding of multiple value perspectives.
6. It's important to recognize existing vested interests in the problem arena to understand the potential for successful implementation of Extension programs.
7. Michael Q. Patton, "Extension's Future: Beyond Technology Transfer," Knowledge: Creation, Diffusion, Utilization, IX (No. 4, 1988), 480-82.
8. Sandra Becker and Marsha R. Mueller, Writing Your Monthly Report (St. Paul: University of Minnesota, Minnesota Extension Service, 1990).