Spring 1991 // Volume 29 // Number 1 // Feature Articles // 1FEA2


Criteria for Judging Excellence

Abstract
This article examines difficulties that have discouraged the development of criteria, identifies benefits criteria can provide, and proposes three criteria for assessing both Ag Experiment Station research and Cooperative Extension programs.


M. F. Smith
Associate Professor and Coordinator
Program Planning and Evaluation
Staff Development
University of Maryland-College Park


Although there's little doubt that research and Extension provide substantial benefits to the public, criteria to assess these two functions haven't been identified and agreed on by the professionals who do the work and the public that supports it. This article examines difficulties that have discouraged the development of criteria, identifies benefits criteria can provide, and proposes three criteria for assessing both Ag Experiment Station research and Cooperative Extension programs.

The two functions, research and Extension, are but different points on a continuum that enhances the creation and distribution of well-being and wealth in this country. (Wealth is defined as things that provide people with sustenance, comfort, convenience, and pleasure.1) Research contributes to economic progress by finding ways to do old things better and by spawning new goods and services; Extension contributes by identifying clients who have need for and/or are able and willing to use the new techniques or products and by making the information available in a useful format.

Why Criteria Are Lacking

The nature of research itself suggests at least three reasons for a lack of evaluative criteria.2 First, the time between expenditures for research and public acceptance of any resulting technology can be long, sometimes measured in years. By the time research outcomes can be properly assessed, scientists have usually moved on to new research. Second, basic research efforts usually contribute only a portion of the total required to get new knowledge fully developed and institutionalized in the public arena, and any number of external factors can interfere with getting intended products or services to this stage. Third, fundamental research can be totally unpredictable in its outcomes (often with serendipitous results). Researchers contribute a fourth reason by vigorously opposing judgment and evaluation of their work by anyone other than peer scientists.

Extension has similar problems that contribute to this lack of agreed-on outcome measures:

  1. Time between programming efforts and client/situation change can be long (for example, in youth development and water quality efforts).

  2. Extension may contribute only a portion of the totality of knowledge required for a client or situation to change, and clients may not themselves be able to assess Extension's role in their decisions. Other agencies often work in some way with some of the same problems as Extension.

  3. Social science methodology for demonstrating differences isn't as exact as that for the physical sciences.

Benefits of Having Criteria

If appropriate criteria are identified for judging excellence, research and Extension efforts will benefit. Such criteria should:

  1. Increase aspirations: serve as models for achievement.

  2. Provide measures of performance.

  3. Enhance the decision-making process of public officials.

Public officials want to know that the programs they support are meeting needs and that their constituents are benefiting. The criteria we provide should help them make decisions about such matters and help us build consensus and support among them and our other powerful and critical stakeholders. These criteria should be sufficiently rigorous to continually challenge educators and researchers to do better work, and sufficiently practical to suggest measurable indicators of performance.

The Criteria

Three criteria are proposed here for judging the excellence of research and Extension efforts: relevance, quality process, and utility. Relevance means the focus of the program or research is appropriate, based on need or expected return. Quality process means a credible procedure or process is followed. Utility means there's an outcome that is or is expected to be of use (for example, in an indirect or incremental fashion, as when adding to our reservoir of knowledge about what is, what works, and what does not; and/or directly, as when some situation or need has been resolved).

In the past, Extension depended most on relevance for its accountability, meaning that what the program was working on was justification for its existence. Research has most often used quality process as its criterion of excellence, and this has been measured by peer-reviewed journal articles. However, in this age of scarce resources and increasing public scrutiny, a quality process and a focus on need are no longer sufficient justification for continuing or increasing public and private funding; there must be some indication of intended and recognizable socioeconomic benefits. These don't have to be as immediate for basic research as they must for Extension. But even for basic research, some link should be discernible.

The criteria, their individual characteristics, and examples of performance indicators are listed in Table 1 for research and Table 2 for Extension. The characteristics are meant to be sufficiently broad in orientation and scope to be applicable to the many and varied research and programming situations; the indicators are meant to be sufficiently specific to prevent misunderstanding of intent. The overall criteria and the defining characteristics may be applicable to most all research and Extension efforts, whereas the indicators will vary depending on the state of knowledge in an area and/or on anticipated outcomes (different programs have different intended outcomes and thus will have different indicators of successful performance).

Table 1. Criteria for excellence in agricultural research.3

I. Relevance
Characteristics:
1. Addresses identified state, regional, or national problem/need, OR has identified link to public use or welfare.
Indicators:
* Funds received from public and/or private sources.
* Covered in legislative mandate and/or institutional research priorities.
* Insufficient private industry effort or results.

II. Quality Process
Characteristics:
1. Scientific credibility.
2. Clear research objectives, valid and reliable methodology.
3. Reports include implications for next steps.
4. Efficient use of resources.
Indicators:
* Credentials of scientists.
* Peer-reviewed project proposals.
* Publications in refereed journals.
* Times referenced in later studies.
* Number of "lay person" publications.
* Additional funds received.

III. Utility
Characteristics:
1. Provides usable results, that is, adds to knowledge in incremental or breakthrough manner, resolves scientific controversy, or solves a problem.
2. Contributes to improvement and/or maintenance of scientific expertise of older staff.
Indicators:
* Creation of new and/or improved (more effective or efficient) products/services.
* Creation and/or expansion of business enterprises.
* Number of references in later research efforts.
* Number of articles/texts produced for and used in resident instruction.
* Additional funds received.
* Articles in news (in Chronicle of Higher Education, on CNN "Science and Technology Week," etc.).
* Number of graduate students attracted/supported.
* Number of requests for scientists to act as consultants to industry and/or to other educational institutions.
* Funds received, publications, and consulting opportunities of older staff.

Table 2. Criteria for excellence in Extension programs.

I. Relevance
Characteristics:
1. Addresses identified (local, regional, or state) need, situation, or concern in timely manner.
2. Addresses need that is amenable to change with education.
Indicators:
* Appropriate others (clients, resource providers, legitimizers) involved in definition of program focus.
* Characteristics and numbers as evidence of intensity/pervasiveness of problem/need.
* Included in institutional/system priorities.
* Funds received from public and/or private sources.
* Need not being sufficiently met by other agencies or private industry (important niche in network for Extension).
* Ameliorative actions are identifiable that can be taken with appropriate information/knowledge.

II. Quality Process
Characteristics:
1. Professional credibility.
2. Plausible program plan.
Indicators:
* Credentials, competencies of professionals and staff.
* Program objectives identified and clear.
* Activities identified and sufficient (type, quantity, and sequence) to exert planned influence.
* Resources identified and allocated (type, quantity, and quality) to implement planned activities.
* Efficient use of resources.
* Appropriate/sufficient disciplines involved in planning and implementation.
* Outcomes clearly identified.
* Identified indicators of how performance can be measured (criteria and standards).
* Identified sources to provide evidence of performance.

III. Utility
Characteristics:
1. Provides useful and used results, that is, contributes to knowledge, solves a problem, or helps clients improve their quality of or situation in life.
2. Contributes to improvement and/or maintenance of expertise of staff.
Indicators:
* Number of intended audience that participate.
* Participants report results are useful and used.
* Number of requests for publications or help.
* Public funds maintained/increased.
* Creation of new and/or improved (more effective or efficient) products/services.
* Creation and/or expansion of business enterprises.
* Private funds received.
* Newspaper and other media coverage, solicited and nonsolicited.
* Evidence of goal achievement.
* Funds received, publications, and requests for help of staff.

In Conclusion

This article has presented three criteria for assessing land-grant research and Extension efforts. These criteria are easy to understand yet rigorous in intended outcome. They should be helpful internally in our efforts to constantly increase public benefits from our work, and externally in our efforts to let the public know where we stand and how we're doing.

Footnotes

1. M. R. Langham and J. C. Purcell, "Economic Evaluation of Postharvest (Marketing) Research: Conceptual and Empirical Issues" (Paper presented at National Symposium on Evaluating Agricultural Research and Productivity sponsored by IR-6 and the

2. A. J. Barbarie, "Evaluating Government R&D: Beyond 'Quality of Research,' " in J. S. Wholey, M. A. Abramson, and C. Bellavita, eds., Performance and Credibility: Developing Excellence in Public and Nonprofit Organizations (Lexington, Massachusetts: Lexington Books, 1986), pp. 109-15.

3. Developed in consultation with Brian Gardner, associate director, AES, University of Maryland, College Park.