August 2003 // Volume 41 // Number 4 // Commentary // 4COM1


Program Evaluation: Use It to Demonstrate Value to Potential Clients

Program evaluation is a powerful tool for demonstrating the value of Extension education to stakeholders. When presenting the results of evaluation, it is important to know exactly who the stakeholder is. As programs increasingly depend on client registration fees, it becomes essential to demonstrate to clients that they will receive return on their investment. This article points out an opportunity for Extension to improve programs and marketing by focusing evaluation to meet the decision needs of business organizations. Core evaluation articles and reports of successful Extension examples are reviewed.

Richard Stup
Senior Extension Associate, Dairy Alliance
Department of Dairy and Animal Sciences
Penn State University
University Park, Pennsylvania



Who's Paying for This?

At least four articles in the Journal of Extension from October 1998 to April 2002 opened with a common theme (O'Neill, 1998; Radhakrishna & Martin, 1999; O'Neill & Richardson, 1999; Bailey & Deen, 2002). The message is that increased accountability for results demanded by public funding sources means Extension must increase evaluation efforts in order to generate systematic and convincing evidence of programming value. The assumption is that better demonstration of value will lead to the continuation of funding.

While this focus on improving evaluation to better demonstrate positive results to public decision-makers is commendable, it ignores another critical group: potential program participants from business organizations. Program registration fees are an increasingly important source of funding for Extension education. Thus, business managers and employees are important not only as the target audience, but also as a source of funding. Business organizations could use program evaluation information to better choose training for employee development purposes.

What Do Business Managers Want to Know?

Business managers regularly select and purchase training opportunities for themselves and their employees. A market exists for high-quality training, but Extension is not the only provider; private organizations are rapidly emerging to meet the need (King & Boehlje, 2000; Stup, Van Saun, & Wolfgang, 2002). There are risks associated with sending employees to a training program. The business manager must pay the registration fee and often the employees' wages while they attend the program. With such a substantial investment, managers want to know that the training will be worthwhile.

One of Extension's advantages over private competitors in the information marketplace is the ability to use evaluation techniques to demonstrate program effectiveness. Evaluation data that shows a connection between Extension training and improved individual or organizational performance can be a powerful marketing tool. Therefore, Extension professionals should prepare evaluation information for business managers who choose which training opportunities to purchase for employee development.

Levels of Evaluation

The classic model of evaluation proposed by Kirkpatrick (1996) is well adapted for gathering the information needed by business managers. Kirkpatrick described four steps in evaluation:

  • Level 1: Reaction. How did participants like the training experience?
  • Level 2: Learning. What did participants learn as a result of the training?
  • Level 3: Behavior. How does on-the-job behavior change as a result of the training?
  • Level 4: Results. What benefits (greater efficiency, higher production, better quality, less employee turnover) does the organization gain from the training?

While it is important for participants to enjoy a training program, a business manager is not likely to be concerned about reaction information. He or she will be somewhat more interested in learning that takes place and keenly interested in behavior changes and results that come from training. Unfortunately, most evaluation stops at reaction and learning without measuring behavior and performance changes.

Behavior Change

A change in workplace behavior is more difficult to measure than learning or reaction (Kirkpatrick, 1996; Dixon, 1996). Even large corporations only evaluate behavior change on a selective basis because it is expensive and requires a custom design for each program (Dixon, 1996). There are, however, techniques that Extension professionals could adapt to meet their evaluation needs.

Kirkpatrick (1996) offers these guidelines for evaluating behavior change:

  • Measure behavior both before and after training.
  • Use multiple sources of information about behavior. These might include the trainee, a supervisor, peer, subordinate, or others who are in a position to observe.
  • Conduct statistical analysis to determine that a difference really exists between before and after behaviors.
  • Measure the "after" behavior 3 months or more after the training.
  • Use a control group.
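As an illustration only, the before-and-after measurement and statistical analysis in these guidelines could be sketched in a few lines of Python. All ratings below are hypothetical, assuming supervisors scored each trainee's on-the-job behavior on a 1-to-5 scale before and again several months after training.

```python
import math
import statistics

# Hypothetical supervisor ratings (1-5 scale) for the same ten trainees,
# collected before training and again three months afterward
before = [2, 3, 2, 4, 3, 2, 3, 2, 3, 2]
after = [4, 4, 3, 5, 4, 3, 4, 3, 4, 3]

# Paired differences: how much each trainee's rating changed
diffs = [a - b for a, b in zip(after, before)]

# Paired t statistic: mean change divided by its standard error
n = len(diffs)
mean_d = statistics.mean(diffs)
t_stat = mean_d / (statistics.stdev(diffs) / math.sqrt(n))

print(f"Mean behavior change: {mean_d:.1f} points (t = {t_stat:.2f})")
# The t statistic would then be compared to a critical value for
# n - 1 degrees of freedom to judge whether a difference really exists.
```

A control group of comparable untrained employees, as Kirkpatrick suggests, would further strengthen any conclusion drawn from such a comparison.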

While all of these guidelines contribute to the scientific strength of an evaluation study, it is possible to get meaningful results without meeting all of them. For example, a Level 3 evaluation was designed to assess the effectiveness of technology training in the Department of Defense (Okurowski & Clark, 2001).

The researchers studied a department where a new, specialized software package was introduced. Some of the users attended a 1-day class on the new software, some received brief tutoring sessions with support people, and some chose to learn the software on their own. Users were compared on how much they used the software, how sophisticated their questions to the help desk were, and how effectively they used the software. Those users who attended the 1-day class used the software more frequently, asked more sophisticated questions, and were more productive than their counterparts.

In a very practical Extension example of behavior change evaluation, participants in food safety training were surveyed about their behaviors as a result of training (Martin, Knabel, & Mendenhall, 1999). Two months after the training, evaluators simply asked participants whether they practiced certain safe food handling behaviors. The possible response categories for each practice may be paraphrased as follows:

  • Did the practice already before training
  • Plan to do because of training
  • Do because of training
  • Probably won't do the practice

This is a simple and straightforward way to measure behavior change as a result of an Extension training program. Variations of this type of evaluation could be used to gather self-reported information about behavior change as a result of a wide range of Extension programs.
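As a hypothetical sketch, tallying this kind of self-reported follow-up survey could look like the following. The practice, response wordings, and counts are invented for illustration; the categories paraphrase the list above.

```python
from collections import Counter

# Invented follow-up responses for one food-handling practice,
# collected two months after training
responses = [
    "did already", "do because of training", "do because of training",
    "plan to do", "do because of training", "probably won't do",
    "did already", "do because of training", "plan to do",
    "do because of training",
]

# Count how many participants fell into each response category
counts = Counter(responses)
adopted = counts["do because of training"]

print(f"{adopted} of {len(responses)} participants adopted the practice "
      "because of training")
```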

Performance Change and Return on Investment

The highest level of evaluation in the Kirkpatrick model is performance change. This is information that potential participants and their employers need to know. Basically, the question it seeks to answer is this:

If an individual attends the training and practices newly learned behaviors at work, will there be positive performance results for the individual and the organization?

Once this question is answered, a return on investment (ROI) may be calculated by assigning dollar value to the performance change and then dividing that value by the cost of the training.
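The arithmetic itself is simple enough to sketch. All dollar figures below are invented for illustration; in a real evaluation, the hardest part is justifying the dollar value assigned to the performance change.

```python
# Hypothetical costs borne by the employer for one participant
registration_fee = 150.00       # program registration
wages_during_training = 120.00  # one day of wages while attending
training_cost = registration_fee + wages_during_training

# Invented estimate of the annual dollar value of the performance change
performance_benefit = 1350.00

# ROI as described above: the value of the performance change
# divided by the cost of the training
roi = performance_benefit / training_cost
print(f"Each dollar invested in training returns ${roi:.2f} in benefits")
```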

A few Extension programs have calculated ROI for accountability purposes with traditional government funding sources. O'Neill (1998) reported that a personal finance training program in New Jersey called "Money 2000" produced a very large ROI. Participants learned how to better manage their finances and reported their successes back to the Extension professionals, who documented this self-reported data and showed over one million dollars in benefits to the combined participants.

O'Neill and Richardson (1999) discuss ROI calculations in detail. Their focus, however, is on an audience of public officials. Simply adjusting their calculation of program benefits and costs from an aggregate to an individual basis can yield ROI information that is meaningful for business decision-makers.

Calculating the ROI of training is not always clear cut. Rowden (2001) offers a detailed explanation of rigorous methods for calculating ROI and points out that a significant problem arises when one tries to separate the effects of training from other influences that might cause a performance change. Chmielewski and Phillips (2002) offer an extensive set of strategies for isolating the effects of training.


Conclusion

Extension's role is changing. Target audiences are still in need of high quality training programs, especially in disciplines that are not well served by private industry. Unfortunately, public funding sources are level or in decline, and fee-based training programs are becoming more important. Extension can turn away from this challenging environment and limit training to only that which can be done with public funds, or it can embrace a new user-supported approach.

For those educators who choose to pursue user-supported training, evaluation will become more important than ever. Business managers will demand that they receive solid information on which to base their decisions. These demands may create an environment that encourages Extension educators to produce outstanding training opportunities that will help people to improve their performance at work. That would be a great return on investment for training participants, businesses, and Extension educators alike.


References

Bailey, S. J., & Deen, M. Y. (2002). A framework for introducing program evaluation to Extension faculty and staff. Journal of Extension [On-line], 40(2).

Chmielewski, T. L., & Phillips, J. J. (2002). Measuring return-on-investment in government: Issues and procedures. Public Personnel Management, 31, 225-237.

Dixon, N. M. (1996, May). New routes to evaluation. Training and Development, 50, 82-85.

King, D. A., & Boehlje, M. D. (2000). Extension: On the brink of extinction or distinction? Journal of Extension [On-line], 38(5).

Kirkpatrick, D. (1996, January). Great ideas revisited: Techniques for evaluating training programs. Training and Development, 50, 54-59.

Martin, K., Knabel, S., & Mendenhall, V. (1999). A model train-the-trainer program for HACCP-based food safety training in the retail/food service industry: An evaluation. Journal of Extension [On-line], 37(3).

O'Neill, B. (1998). Money talks: Documenting the economic impact of extension personal finance programs. Journal of Extension [On-line], 36(5).

O'Neill, B., & Richardson, J. G. (1999). Cost-benefit impact statements: A tool for extension accountability. Journal of Extension [On-line], 37(4).

Okurowski, M. E., & Clark, R. (2001). The use of level three evaluation data to assess the impact of technology training on work performance. Performance Improvement Quarterly, 14, 57-76.

Radhakrishna, R., & Martin, M. (1999). Program evaluation and accountability training needs of extension agents. Journal of Extension [On-line], 37(3).

Rowden, R. W. (2001). Exploring methods to evaluate the return on investment from training. American Business Review, 19, 6-12.

Stup, R., Van Saun, R., & Wolfgang, D. (2002). A promising new role for extension educators in a dynamic industry: The cow sense project. Journal of Extension [On-line], 40(6).