December 2007 // Volume 45 // Number 6 // Ideas at Work // 6IAW1


Pre- and Post-Testing with More Impact

A novel approach to pre- and post-testing was used to remove the element of guessing from test answers and thereby better quantify the knowledge gained by participants in a workshop. The approach showed a 10% greater knowledge gain than the traditional method of tallying pre-/post-tests. The method also provides feedback the instructor can use to improve workshop content, allowing him or her to better gauge the time needed for program components, measure participants' confidence in their answers, and identify where participants believe incorrect answers to be correct.

Greg La Barge
Assistant Professor/Extension Educator
Agriculture, Natural Resources & Community Development
Ohio State University Extension-Fulton County
Wauseon, Ohio


We have all done it! On a multiple-choice question, we carefully look over our answer options. Using a process of elimination, we identify two of four potential answers that seem correct to us, yet with our current knowledge we are unsure which is right. But we have been told since elementary school to mark an answer, even if we are guessing, because there is a probability we will get some questions correct. We then circle an answer, letting the odds play out. If someone asked, "Did you really know the answer?" we would honestly say, "No, it was a lucky guess."

Pre-/post-test procedures are a commonly used method of evaluating learner outcomes of educational programs. The procedure provides feedback to the instructor by measuring the learner's initial knowledge level and the knowledge gained from the workshop or presentation. In a perfect world, participants would not answer any question they did not know the answer to. As instructors, how do we account for the natural tendency to mark an answer to every question and the associated probability of getting a question correct? By ignoring this reality, we do not truly measure learners' baseline knowledge or account for what we accomplish through instruction.

Another intuitive question is whether we can measure the confidence a person has in applying his or her knowledge by distinguishing a guessed correct answer from one the person actually felt sure of. Conversely, can we also find situations where a participant marks an answer he or she is confident enough to say is not a guess, but the answer is wrong? One could argue that correcting this occurrence is perhaps the greatest impact of our teaching.

A novel approach to pre-/post-testing was used during six Soil Fertility Workshops to help answer some of these basic questions (Figure 1). The method attempted to better quantify the learners' baseline knowledge and what they gained from their workshop participation. The concept was originally described by Alliger and Horowitz (1989). A pre- and post-test was designed around the workshop content. Each question was then paired with the qualifier question "Are you guessing?," to which the participant could answer "Yes" or "No." This allowed participants to satisfy their urge to answer every question while giving the instructor additional information on whether an answer was a lucky guess or the application of previously gained knowledge.

Figure 1.
Example Layout of a Pre-Test with a Guessing Qualifier Question Added


Materials and Methods

The Soil Fertility Workshop was presented at six locations in Ohio during 2002 and 2003. Workshop participants included farmers, ag chemical/fertilizer dealers, and crop consultants, including those with Certified Crop Advisor certifications. Pre- and post-tests consisting of 13 matched true/false and multiple-choice questions were designed so that each pre-/post-question pair tested a similar area of knowledge. Participants were asked to complete and turn in the pre-test before any instruction began. The post-test was collected at the workshop's conclusion. Participants were asked to provide the last four digits of their telephone number on both tests so the tests could be matched.

A total of 140 participants attended the six workshops, yielding 67 valid, matched pre-/post-comparisons for analysis. The data for each question were entered in Microsoft Excel, and analysis was conducted using SPSS Version 11.0. One poorly worded question was excluded from the analysis, leaving 12 questions for the comparison.

The data were analyzed in two ways. Method one was a "Traditional" correct/incorrect tally. Method two considered the answer to the qualifier question, tallying Correct "Knew" and Correct "Guess" responses separately. Under the qualifier method, any question for which the participant indicated they were guessing was counted as incorrect when determining the number of correct responses.
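The two tallying methods can be sketched as follows. The answer records here are made up for illustration; the actual workshop data are not reproduced.

```python
# Sketch of the two scoring methods described above, using hypothetical
# answer records. Each record is (answered_correctly, said_guessing),
# where said_guessing is the participant's "Are you guessing?" response.
responses = [
    (True, False),   # correct, "Knew"  -> counts under both methods
    (True, True),    # correct, "Guess" -> counts under Traditional only
    (False, False),  # incorrect, "Knew"
    (False, True),   # incorrect, "Guess"
]

def traditional_score(responses):
    """Method one: a simple correct/incorrect tally."""
    return sum(correct for correct, _ in responses)

def qualifier_score(responses):
    """Method two: a correct answer counts only if the participant
    answered "No" to the "Are you guessing?" qualifier."""
    return sum(correct and not guessed for correct, guessed in responses)

print(traditional_score(responses))  # 2
print(qualifier_score(responses))    # 1
```

The guessed-but-correct answer is the only record the two methods score differently, which is exactly the gap the qualifier is designed to expose.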

Results and Discussion

Results in Table 1 show the percentage of participants with correct answers using the "Traditional" method compared to the qualifier method of Correct "Knew" and Correct "Guess." Under the "Traditional" method, the participants' average pre-test score was 46% correct on the 12 questions. The post-test average was 88% correct. Thus, the knowledge gain for participants in the soil fertility workshop was 42% based on "Traditional" scoring.

Table 1.
Percentage of Pre- and Post-Test Correct Results Using the Traditional and "Are You Guessing?" Qualifier Methods (Valid Number = 67 Matched Pre- and Post-Tests)

Question | Pre-Test: % Correct Traditional | Pre-Test: % Correct Selecting "Knew" | Pre-Test: % Correct Selecting "Guess" | Post-Test: % Correct Traditional | Post-Test: % Correct Selecting "Knew" | Post-Test: % Correct Selecting "Guess"

If a participant's answer of "Yes" to the qualifier question "Are you guessing?" is counted as an incorrect answer, the measured impact of instruction increases. The impact statement for knowledge gained by participants in the soil fertility workshop becomes, "The workshop participants increased their post-test score by 52%, to an average of 83%, compared with their pre-test average of 31%." The "Are you guessing?" qualifier thus increased the measured knowledge gain by 10% over the "Traditional" counting method. Alliger and Horowitz (1989) noted a 15% difference in measured knowledge gain when comparing the qualifier method to the traditional method.
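The arithmetic behind that comparison, using the averages reported above, is simply the post-test average minus the pre-test average under each method:

```python
# Knowledge gained = post-test average minus pre-test average
# (in percentage points), for each tallying method.
traditional_gain = 88 - 46  # Traditional method: 88% post, 46% pre
qualifier_gain = 83 - 31    # "Are you guessing?" qualifier: 83% post, 31% pre

print(traditional_gain)                   # 42
print(qualifier_gain)                     # 52
print(qualifier_gain - traditional_gain)  # 10, the difference reported above
```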

The qualitative factor of increased confidence in knowledge is another impact that can be measured with the qualifier method. Guessing, whether the answer was correct or incorrect, was reduced from a 53% pre-test average to a 10% post-test average (data not shown). The workshop significantly increased participants' confidence in the answers they gave on the post-test.

Evaluating the individual question results in Table 1 also provides valuable feedback to the instructor(s) as they refine the workshop for future audiences. The workshop content represented by Q6 and Q10, which generated the highest Correct "Knew" percentages on the pre-test, may require less instruction time. A topic like that represented by Q7, which generated a low initial Correct "Knew" percentage of 7% on the pre-test as well as a lower post-test score of 64%, may require more time or a different method of explaining the concepts surrounding the question.

The qualifier method also helps identify areas that initially appear to deserve equal emphasis but where the guessing qualifier reveals a difference in participants' confidence in their answers. By the "Traditional" method, we would put pre-test Q5 and Q8 in the same category (43% and 45% correct, respectively). Using the qualifier, we find substantially lower confidence in the Q5 answers (33% guessing) than in the Q8 answers (12% guessing). Additionally, a question like Q9 may require additional time, since 10% of respondents indicated they were guessing but got the correct answer on the post-test.

Another factor that can be evaluated with the qualifier method is the number of participants who gave an incorrect answer but indicated they were not guessing. Table 2 highlights the results of this comparison. Several questions generated a high incorrect-but-"Knew" response rate on the pre-test but a significantly lower percentage on the post-test. It could be argued that correcting incorrect knowledge is a more important result of the workshop than the increased percentage of correct responses alone. The qualifier also gives feedback on the teaching method. The concepts in Q7 generated a 64% Correct "Knew" (Table 1) and a 28% Incorrect "Knew" (Table 2) on the post-test. It could be concluded that the teaching method needs to be changed, because 28% of the participants did not learn the concepts being taught in relation to this question.

Table 2.
Summary by Question of Incorrect Responses When the Participant Indicated They "Knew" the Answer Was Correct

Question | % Incorrect Selecting "Knew" (Pre-Test) | % Incorrect Selecting "Knew" (Post-Test)


Reference

Alliger, G. M., & Horowitz, H. M. (1989). IBM takes the guessing out of testing. Training and Development Journal, 43(4), 69-73.