January 1984 // Volume 22 // Number 1 // Feature Articles // 1FEA1
Practical Benefits of Evaluation: An Example
Abstract
The authors present an illustrated step-by-step approach to evaluation using an example of promoting change through newspaper articles.
Planning, implementing, and reporting evaluations can have many practical benefits if the process produces credible results. Planning evaluations before programs are implemented can increase the impact of programs by revealing objectives that have no planned activities and those with insufficient program depth to bring about desired change. Targeted revisions in these areas can increase the possibility for impact to occur. Implementing evaluations can identify ineffective programs and thus permit reallocation of valuable resources to new needs and/or to effective programs. Reporting evaluation results can increase citizen interest and awareness of programs and indicate that public resources are being used wisely.
This article: (1) illustrates the key steps in the evaluation process, (2) reports the findings from the evaluation, (3) demonstrates the importance of planning evaluations before programs are implemented, and (4) encourages the use of evaluation techniques as decision-making tools for program development.
Program Description
A series of five newspaper articles designed to help consumers use their grocery dollars wisely was run in consecutive issues of a weekly newspaper in May and June, 1981. One agent of the six involved in a multi-county resource management project ran the series before the other five. The articles suggested ways to:
- Assess present practices (unnecessary and/or wasteful purchases).
- Compare nutritional value and prices of quick-to-fix and cooked-from-scratch foods.
- Use cents-off coupons wisely.
- Save time and money in food preparation (making your own master mixes).
- Compare nutritional value and prices of the typical fast-food items (hamburgers, fish sandwiches, french fries) with those made at home.
Evaluation Purpose
Deciding the exact purpose of the evaluation may be the most important part of planning an evaluation because the answer sets parameters for other steps.1 Two substeps need to be carried out:
- Decide who needs or wants evaluation data. Is it just the agent or the advisory committee or the state office or all three?
- Determine why the evaluation is desired. What decisions will the results affect?
The purposes of the newspaper study were: (1) to determine if readers made practice changes or gained knowledge as a result of reading the articles and (2) to determine if any of the articles needed revision. The first purpose, impact, was of interest for accountability and, from a professional viewpoint, for the advancement of knowledge. Readership of newspapers has been studied extensively in terms of who reads and for what reasons, how much of what content is read, when reading takes place, the amount of time spent reading, and reader gratifications. However, few of the variables that have been studied are directly useful to Extension educators interested in clientele practice change. And, with Extension's four-year reporting system's emphasis on impact and accountability, more results-oriented data must be acquired.
The decision to be made was whether the other five agents in the team would run the series in their local papers and, if so, what revision was required. It was primarily the agent team and district administrator who wanted the information.
Measurement-What and How
For each decision, descriptions should be provided of the type of information (indicators of success/failure) that will be convincing evidence of accomplishment, and a procedure determined for gathering that information, for example, a mail questionnaire, personal interview, telephone interview, or observation. Decisions in this study required data on: (1) which articles were read and by how many and (2) how many made practice changes or learned money-saving techniques as a result of reading the articles. Indicators were listed and nine questions prepared for a one-page mail questionnaire.
Standards for Success
Performance standards should be set for each question and/or for each instrument or questionnaire. (Without standards, evaluation is a descriptive exercise.) This setting of performance standards isn't an easy exercise. Identifying standards and criterion cut-off scores are both matters involving value judgments and, in most instances, that judgment is arbitrary. A 50% readership cut-off was set because the costs for disseminating the information through a newspaper were minimal, so the number of people affected could be low and still balance favorably with costs. Also, the subscription list contained names of businesses, and thus many papers wouldn't reach the intended audience (women). A 75% standard for reader change was set because the agent felt that if readership were low, change per reader needed to be fairly high for the cost/benefit ratio to be balanced.
Design
A design specifies when and from whom data will be collected and with whom/what results will be compared. Sometimes research designs are used, but they aren't always sufficient for answering evaluation questions. Some designs require sampling. But whether they do or not, sampling should always be considered because it can cut costs, increase the speed with which data are collected and summarized, and increase the accuracy of results.6
The newspaper had a subscription list of 1,500 names. A random sample (N = 320) of within-county subscribers was selected systematically from this list to yield a precision rate of 5% for a confidence level of 95%.
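For readers who want to check such figures, a sample size like this can be approximated with the standard formula for estimating a proportion, adjusted for a finite population. The sketch below is illustrative only; the exact formula and rounding used in the original study (drawn from Sudman) aren't reported, and the 95% z-value and worst-case proportion of 0.5 are conventional assumptions.

```python
import math

def sample_size(population, margin=0.05, z=1.96, p=0.5):
    """Approximate sample size for estimating a proportion within +/- margin
    at 95% confidence (z = 1.96), with a finite population correction."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2      # infinite-population size (about 384)
    return math.ceil(n0 / (1 + (n0 - 1) / population))

print(sample_size(1500))   # about 306, close to the 320 subscribers actually drawn
```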
The posttest-only design was used. One month after the fifth article appeared, a one-page questionnaire with an individually typed cover letter was sent to the sample of subscribers. A second (and third) letter and questionnaire were sent to those not responding to the first (and second) letters.
Data Analysis
The purpose of this step is to decide, in advance of data gathering, how all the numbers will be handled to accurately answer the questions of interest. It's important to do in advance because the method of analysis many times will affect the form in which data are gathered. This step can further help clarify what needs to be learned in an evaluation.
The statistics used in the newspaper project were very simple: percentages of respondents who indicated having seen the articles, of the number of articles read, and, for each article, of readers making a practice change or knowledge gain.
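The tabulation itself amounts to a few divisions. The sketch below uses hypothetical tallies (the actual counts appear under Results) simply to show how each percentage is formed.

```python
# Hypothetical tallies from returned questionnaires (illustrative numbers only)
returned = 200                                        # questionnaires returned
read_any = 130                                        # read at least one article
read_counts = {1: 130, 2: 110, 3: 100, 4: 95, 5: 90}  # readers of each article
changed_counts = {1: 120, 2: 50, 3: 80, 5: 60}        # readers reporting change or gain

print(f"Read at least one article: {100 * read_any / returned:.0f}%")
for article, readers in read_counts.items():
    line = f"Article {article}: read by {100 * readers / returned:.0f}% of respondents"
    if article in changed_counts:
        line += f"; {100 * changed_counts[article] / readers:.0f}% of readers reported change"
    print(line)
```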
Cost Estimation
The costs involved in evaluation are difficult to predict accurately. However, it's a step that should be done thoroughly before evaluation plans are implemented. Some projects get stopped prematurely (which means that all the resources are wasted) due to poor planning for time and other resource costs. Some of the obvious costs are postage, telephone, travel, purchased measuring tools, printing, consultants, and office overhead. Generally, the greatest cost is staff time, and it's the hardest to predict accurately. In the example, 15 days were planned for staff and 6 for secretarial assistance. Other costs anticipated were: postage, $175; stationery, $7; telephone, $20; printing, $40.
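The anticipated out-of-pocket items can be totaled in a line or two. The sketch below simply sums the figures given above, with staff time tracked separately in days since no daily rate is reported.

```python
# Anticipated direct costs from the example; staff time is tracked in days
anticipated = {"postage": 175, "stationery": 7, "telephone": 20, "printing": 40}
staff_days, secretarial_days = 15, 6

print(f"Direct costs: ${sum(anticipated.values())}")   # $242
print(f"Time: {staff_days} staff days, {secretarial_days} secretarial days")
```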
Implementation Plan
A strategy should be outlined to assure that the evaluation is implemented properly. Well-designed studies that are poorly implemented become bad studies. A step-by-step plan should be developed for who is to do what, when, and with what resources to complete the job. Part of the basic plan of tasks for this project was:
| What | When to complete | By whom |
| --- | --- | --- |
| Sampling plan | Last week, May | State evaluation specialist |
| Sample subscribers selected | Last week, June | County home economics agent |
| Draft questionnaire and cover letters | Second week, June | State evaluation specialist |
Communication of Results
Often the results of evaluations of educational and human service programs go undisseminated or are shared with only a select few. But, if evaluation results are to be used, they must be reported in a timely manner to the people who have the ability and interest to use them. They must be communicated in ways that cause this group to notice and want to use them. Reports should explain the results of the study and indicate if standards were met, compare the results to other literature, and discuss limitations of conclusions.
Study results were summarized within six weeks of receipt of the last questionnaire and communicated orally to the agent who helped implement the study. About six to eight weeks later, an informal report was made to the other agents in the multicounty project and the district administrator. About a year later, a report was distributed intrastate to Extension administration and home economics Extension agents.
Results were as follows:
- The actual costs came close to those projected in staff time, telephone, and printing. Postage was underestimated ($15) as a result of not planning for the mailing of the report to all county offices.
- The overall response rate was 64%. The first standard set for the study (50% read at least 1 article) was exceeded. The range of readership was from a high of 69% for the first article to a low of 47% for the final article; 90% read more than 1 article and 23% read all 5. The second standard (75% of readers of an article report practice change or knowledge gain) was exceeded by 2 articles, but not met by 2: 97% of readers of article one had implemented at least 1 of its recommended practices and 57% had implemented 2-4; 37% of article two readers reported changing their habits; and 80% of article three readers and 65% of article five readers reported knowledge gain. (No questions were included for article four.)
These results are much higher than those found in a similar Extension study where only 5.6% of the respondents read at least 1 article and 2.1% made changes. However, there are limitations to concluding that the newspaper articles caused the individuals to make the changes they reported. Respondents could have known the information before reading the articles. They could have learned the ideas from other sources during the time the articles were appearing in the paper. They could have reported change that hadn't occurred.
If these validity threats weren't present, the full impact of the articles could be summarized as follows: If the assumption was made that all non-respondents did not read the articles and that the sample (320) represented the population of readers, a conservative estimate of the full impact of these articles would be that 3,400 people ± 5% read at least 1 of the articles and 2,300 ± 5% made changes that increased the nutritional value of their purchases and/or saved them money. A more generous estimate would assume that the respondents (202) represented the population of readers. If so, the number reading at least 1 article could be as high as 5,200 ± 7% and the number making changes, 3,600 ± 7%.
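Both projections follow the arithmetic given in the footnotes: the rounded sample percentages are applied to an estimated 2,100 copies in circulation (subscriptions plus machine sales) and a readership multiplier of 4 persons per copy. The sketch below reproduces that calculation using the percentages reported in the footnotes.

```python
# Figures from the footnotes: 1,500 subscribers + 600 machine sales,
# multiplied by an assumed 4 readers per copy.
audience = (1500 + 600) * 4                      # 8,400 potential readers

# Rounded sample percentages (126 readers and 86 changers, out of 320 or 202)
conservative = {"read at least one article": 0.40, "made at least one change": 0.27}
generous     = {"read at least one article": 0.62, "made at least one change": 0.43}

for label, shares in (("Conservative (± 5%)", conservative), ("Generous (± 7%)", generous)):
    estimates = {k: round(v * audience) for k, v in shares.items()}
    print(label, estimates)
# Conservative: about 3,360 read and 2,268 changed; generous: about 5,208 and 3,612
```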
Conclusion
The planning of this evaluation showed some articles to be weak in offering measurable ways to make changes or gain knowledge. Had the planning occurred before the series was begun, targeted rewrites could have increased the possibilities for measurable impact to occur. Implementation of the study showed that overall the series was effective, but that individual articles could be improved. Comments from respondents provided valuable insight for revising specific articles by other agents desiring to use the series. Reporting the evaluation has increased the awareness of the resource management project by all Extension staff in the state. And, the report of the study on the front page of the newspaper that printed the articles should have heightened the awareness of all citizens of Extension's interest in being accountable for its use of taxpayer funds.
Footnotes
- S. B. Anderson and S. Ball, The Profession and Practice of Program Evaluation (San Francisco: Jossey-Bass, 1978).
- For example, see R. F. Carter, "Communication Behavior" (paper presented to the Association for Education in Journalism, Fort Collins, Colorado, August, 1973); L. B. Becker, "Two Tests of Media Gratification: Watergate and the 1974 Election," Journalism Quarterly, LIII (Spring, 1976), 23-29; J. E. Grunig, "Time Budgets, Level of Involvement and Use of Mass Media," Journalism Quarterly, LVI (Summer, 1979), 248-61; G. C. Stone and R. V. Wetherington, Jr., "Confirming the Newspaper Reading Habit," Journalism Quarterly, LVI (Autumn, 1979), 554-56; and K. R. Stamm and M. D. Jacoubovitch, "How Much Do They Read in the Daily Newspaper: A Measurement Study," Journalism Quarterly, LVIII (Summer, 1980), 234-42. For a more extensive survey of theoretical work, see E. Katz and J. G. Blumler, eds., The Uses of Mass Communications (Beverly Hills, California: Sage, 1974).
- G. V. Glass, "Standards and Criteria," Journal of Educational Measurement, XV (Winter, 1978), 237-61.
- D. T. Campbell and J. C. Stanley, Experimental and Quasi-Experimental Designs for Research (Chicago: Rand McNally, 1963).
- D. L. Stufflebeam, "The Use of Experimental Design in Educational Evaluation," Journal of Educational Measurement, VIII (Winter, 1971), 267-74.
- S. Sudman, Applied Sampling (New York: Academic Press, 1976).
- M. M. Tien, "Toward a Systematic Approach to Program Evaluation Design," IEEE Transactions on Systems, Man and Cybernetics, SMC-9 (September, 1979), 494-515.
- Anderson and Ball, The Profession and Practice of Program Evaluation.
- "Social Factors Influencing the Effect of Mass Media in a Coordinated Approach to Teaching of Home Economics Subject Matter," Experiment Station Research Project No. 430 (Columbia: University of Missouri, 1966).
- The original sample was selected for an accuracy rate of ± 5%. A conservative estimate would be calculated as follows: percentage of total sample (320) who read at least 1 article (126 or 40%) x 2,100 potential newspaper buyers (1,500 subscribers + 600 papers sold in machines) x readership multiplier (4) = 3,360 ± 5%. Percentage of total sample who reported at least 1 practice change (86 or 27%) x 2,100 x 4 = 2,268 ± 5%.
- A return rate of 202 yields a precision rate of ± 7%. The more generous estimate would be: percentage of respondents (202) who read at least 1 article (126 or 62%) x 2,100 x 4 = 5,208 ± 7%. Percentage of respondents making at least 1 practice change (86 or 43%) x 2,100 x 4 = 3,612 ± 7%.
- County Record, Blountstown, Florida, December 16, 1982, p. 1.
Accepted for publication: July, 1983.