The Journal of Extension - www.joe.org

February 2017 // Volume 55 // Number 1 // Tools of the Trade // v55-1tt1

Evaluate Naturally and Quickly with Just-in-Time Program Evaluation

Abstract
A just-in-time evaluation approach can help Extension professionals expand their program evaluation readiness, interest, and competence. A pilot test of this efficient approach helped organizers of a forest farming conference gather important information about the event's processes and content and become better positioned for future work. The conference evaluation team and conference participants enjoyed this approach, which integrated evaluation methods and evaluative thinking into all the conference activities. The approach is especially appropriate for gathering real-time data to determine the value of educational processes as they occur while making evaluation more visible and easier to accomplish.


Nancy K. Franz
Professor Emerita
School of Education
Iowa State University
Ames, Iowa
nfranz@iastate.edu

John F. Munsell
Associate Professor
Department of Forest Resources and Environmental Conservation
Virginia Polytechnic Institute and State University
Blacksburg, Virginia
jfmunsel@vt.edu

Tiffany N. Brown
Extension Project Assistant
Department of Forest Resources and Environmental Conservation
Virginia Polytechnic Institute and State University
Blacksburg, Virginia
tiffany.brown@vt.edu

Holly K. Chittum
Extension Project Associate
Department of Forest Resources and Environmental Conservation
Virginia Polytechnic Institute and State University
Blacksburg, Virginia
hollykc@vt.edu

Evaluation enhances successful Extension programs and organizational sustainability (Braverman, Engle, Arnold, & Rennekamp, 2008). Over Extension's history, program evaluation has included implementing experimental designs; demonstrating results; measuring outcomes; and assessing operations, accountability, and collaboration (Nichols, Blake, Chazdon, & Radhakrishna, 2015). Despite this rich history and the strong need for program evaluation, individuals, teams, program units, and administrators have been slow to integrate program evaluation and evaluative thinking into all aspects of program and organizational development (Franz & McCann, 2007; Rennekamp & Arnold, 2009). As public support for Cooperative Extension and Extension programs declines and programming environments become more complex, Extension staff and partners need new, efficient evaluation approaches that improve program quality, public perception, and organizational development (Franz, 2011, 2013, 2014, 2015).

Just-in-Time Evaluation

A just-in-time evaluation can help Extension professionals more fully embrace evaluative thinking and improve evaluation competency. The approach requires little preparation and minimal follow-up and is integrated into the program being evaluated. An evaluation leader works with a team of program participants or stakeholders to plan and conduct the evaluation during the program and to report the results before the program concludes. Costs for this method are usually low, with the largest investment being the time given by evaluation team members and program participants.

A Forest Farming Conference Just-in-Time Evaluation

As leaders for a forest farming conference, we developed an approach we termed just-in-time evaluation to assess the Forest Farming to the Forefront: Developing and Promoting Roadmaps for the Northeastern United States conference. The 2-day event for academics and practitioners was conducted at the Cornell University Arnot Teaching and Research Forest. The readiness of the conference planners and the informal nature of the audience, event, and facility aligned well with the just-in-time method.

Prior to the conference, we reviewed the purposes of the event and the evaluation; discussed potential methods, supplies, and arrangements needed to implement the evaluation method; and identified potential evaluation team members. We then incorporated evaluative moments into the agenda, including sharing of the evaluation plan during the event kickoff session, updates during breaks and meals, and evaluation summaries at the end of the first day and during the closing session. This schedule of evaluative moments required writing the evaluation report as the conference and evaluation took place. The evaluation leader met several times each day with the evaluation team to collect and analyze data and discuss next steps in the evaluation process.

An evaluation team of four members was established at the outset of the event. They were selected on the basis of their interest in the event content and the evaluation process and their readiness to be on the team. The group included an external evaluator, the principal investigator, and two forestry agency staff. The team met before the opening conference session to get acquainted, review the purpose of the evaluation, select evaluation methods, and assign evaluation tasks. The group collected data by using listening posts, focus groups, and observations of groups.

The evaluation (a) determined to what degree the event enhanced forest farming community and stakeholder relationships; (b) identified strategies for overcoming forest farming challenges and capitalizing on opportunities; (c) developed a brief roadmap, including printed and web-based informational articles; and (d) assessed the process and lessons learned for future forest farming meetings.

Evaluation Findings

Multiple benefits of the conference were revealed, including networking and developing new relationships with key leaders, innovators, and noteworthy professionals; substantive discussions addressing forest farming issues and practices; advancing research; building a stronger foundation for selecting products; expanding understanding beyond one forest-farmed product; gaining support, guidance, feedback, and wisdom on forest farming; and expanding networks. Participants said, "I feel privileged to be a part of this group," and "This was a historic, mile-marker–setting, future-direction event."

The evaluation also documented common issues discussed across work groups, indicators of high-functioning work groups, and recommendations for future forest farming meetings. The day after the conference, the evaluation leader provided an eight-page report to the principal investigator that included identification of the evaluation team; descriptions of the evaluation purpose, methods, and findings; and the raw data.

Lessons Learned About Just-in-Time Evaluation

The just-in-time evaluation was a new endeavor that we created to achieve an efficient, holistic, integrated, and cost-effective approach to evaluation with immediate reporting of results. The pilot test yielded several lessons to consider for future use.

  • Evaluation team member participation should be voluntary, with members permitted to join or leave the team at any time; this reduces the pressure to perform as an evaluation expert and the discomfort caused by unfamiliar tasks.
  • The evaluation purposes, methods, and reporting should be simple because minimal time is available to plan, implement, and report findings.
  • Integrating evaluation reports into the educational program is critical for informed participation.
  • The evaluation report can be useful for populating grant proposals and progress reports.
  • The evaluation team members need to be flexible and open to new ways of participating in an educational program.
  • The approach works well for baseline, one-point-in-time, or formative/process evaluation in which data are available in real time. It is less effective for gathering summative or impact data, which require measuring results over a period of time.
  • The flexibility of the approach can serve more than the original purposes of the evaluation. In our case, for example, a conference participant asked the evaluation team to collect data about the conference facility for conversations with university administrators.

Summary

A just-in-time evaluation can help Extension professionals expand their program evaluation readiness, interest, and competence. Our pilot test helped organizers of a forest farming conference gather important information about the program's processes and content and become better positioned for future work. The conference evaluation team and conference participants benefited from the integration of evaluation methods and evaluative thinking into all activities. Just-in-time evaluation is especially appropriate for gathering real-time data to determine the value of educational processes as they occur while making evaluation more visible and easier to accomplish. This type of evaluation also provides important efficiencies in evaluation planning, implementation, reporting, and funding.

References

Braverman, M., Engle, M., Arnold, M., & Rennekamp, R. (2008). Program evaluation in a complex organizational system: Lessons from Cooperative Extension. New Directions for Evaluation, 120.

Franz, N. (2011). Advancing the public value movement: Sustaining Extension during tough times. Journal of Extension, 49(2), Article 2COM2. Available at: https://www.joe.org/joe/2011april/comm2.php

Franz, N. (2013). Improving Extension programs: Putting public value stories and statements to work. Journal of Extension, 51(3), Article 3TOT1. Available at: https://www.joe.org/joe/2013june/tt1.php

Franz, N. (2014). Measuring and articulating the value of community engagement: Lessons learned from 100 years of Cooperative Extension work. Journal of Higher Education Outreach and Engagement, 18(2), 5–18. Available at: http://openjournals.libs.uga.edu/index.php/jheoe/article/view/1231/759

Franz, N. (2015). Programming for the public good: Ensuring public value through the Cooperative Extension program development model. Journal of Human Sciences and Extension, 3(2), 13–25.

Franz, N., & McCann, M. (2007). Reporting program impacts: Slaying the dragon of resistance. Journal of Extension, 45(6), Article 6TOT1. Available at: https://www.joe.org/joe/2007december/tt1.php

Nichols, A., Blake, S., Chazdon, S., & Radhakrishna, R. (2015). From farm results demonstrations to multistate impact designs: Cooperative Extension navigates its way through evaluation pathways. Journal of Human Sciences and Extension, 3(2), 83–107.

Rennekamp, R., & Arnold, M. (2009). What progress, program evaluation? Reflections on a quarter century of Extension evaluation practice. Journal of Extension, 47(3), Article 3COM1. Available at: https://www.joe.org/joe/2009june/comm1.php