August 2017 // Volume 55 // Number 4 // Tools of the Trade // v55-4tt1
Public Value Posters: Conveying Societal Benefits of Extension Programs Through Evaluation Evidence
Abstract
The public value poster session is a new tool for effectively demonstrating and reporting the public value of Extension programming. Akin to the research posters that have long played a critical role in the sharing of findings from academic studies, the public value poster provides a consistent format for conveying the benefits to society of Extension programs and resources. This article provides background on the creation of a public value poster rubric and the implementation of an inaugural public value poster session. This type of session holds enormous potential for building capacity to link program evaluation with public value messaging.
Effectively demonstrating and reporting the public value of programming is one way Extension can strengthen public support and sustain critical government funding for operations. As lamented by Franz, Arnold, and Baughman (2014), "Extension needs new strategies, methods, and partners to measure public value and to answer 'so what?' about programs, but the organization lacks strong capacity and leadership to change practice" ("The Connection Between Evaluation and Public Value," para. 2).
This article highlights one new strategy for moving things forward in this realm: the public value poster. Akin to the research posters that have long played a critical role in the sharing of findings from academic studies, the public value poster provides a consistent format for conveying the benefits to society of Extension programs and resources. The article provides background on the creation of a public value poster rubric and the implementation of an inaugural public value poster session at the University of Minnesota's fall program conference. This type of session holds enormous potential for sustainably building capacity to link program evaluation with public value messaging.
Background
Thanks largely to the pioneering work of Moore (1995), Kalambokidis (2004, 2011), and Franz (2011, 2013), the public value concept is well-known in Extension (see also Carroll, Dinstel, & Manton, 2015; Haskell & Morse, 2015; Majee & Maltsberger, 2013). Training has been provided in numerous states on how to write public value statements for programs and how to create public value stories and communicate public value to stakeholders.
Building on this pioneering work, Chazdon and Paine (2014) published their evaluating for public value framework to specify that four distinct dimensions of evaluative data are needed to demonstrate public value (see Figure 1):
- data on the "publicness" of the participant and the participant's goals;
- data on organizational credibility, which incorporates participant and stakeholder perceptions of program quality, as well as the reputation of the delivery organization;
- data on program outcomes, with an emphasis on the value gained by program participants; and
- data on broader impacts, with an emphasis on changes in conditions beyond direct effects on participants (i.e., community- or population-level impacts, systems-level changes).
Figure 1.
Evaluating for Public Value Framework
From "Evaluating for public value: Clarifying the relationship between public value and program evaluation," by S. Chazdon and N. Paine, 2014, Journal of Human Sciences and Extension, 2(2), p. 106. Copyright 2014 by Journal of Human Sciences and Extension. Reprinted with permission.
Developing a Rubric
In 2014, University of Minnesota Extension's evaluation team led a project, the Central Region Project, to elicit impact narratives about Extension programs. Based initially on the most significant change method (Dart & Davies, 2003), the project was intended to help staff and stakeholders articulate changes that occur because of Extension programming. The team developed a rubric to review and score narratives on the basis of their strength in communicating significant change.
As a result of the Central Region Project (Chazdon, 2016), and building on the evaluating for public value framework, the team created a rubric that specifically focuses on public value in impact narratives.
The Rubric and Poster Session
Consistent with the evaluating for public value framework, the rubric features four dimensions:
- target audience,
- why Extension,
- behavior or action outcomes, and
- broader impacts.
The public value poster rubric is displayed in Figure 2. To better fit with Extension's existing culture of using poster presentations rather than impact narratives, we created a public value poster session as part of the University of Minnesota Extension's fall program conference. Extension faculty were invited to submit proposals for traditional research posters or public value posters. Those selected to create public value posters were given additional training on the rubric and the poster format and connected to evaluators well in advance of the poster session to strategize about the evaluation evidence needed to strengthen their posters.
Figure 2.
Public Value Poster Rubric
Finding Evidence
Authors of public value posters need to collect evaluation evidence from stakeholders outside Extension to effectively support claims about the dimensions of public value. This effort does not need to be overly complex or burdensome, however. The recently published Impact Indicators Tips Booklet (Morse, French, & Chazdon, 2016), which is available at http://aese.psu.edu/nercrd/impacts/impact-indicators-tips-booklet, describes methods and tips for using the "but for . . ." approach for checking with stakeholders about impacts that would not have occurred if not for Extension programs (see especially pages 18–20 of the tips booklet). Conducting such follow-up interviews, for example, is a task well suited to student assistants or interns.
Ripple effects mapping is another approachable tool for collecting stakeholder feedback on the public value of Extension programs (Emery, Higgins, Chazdon, & Hansen, 2015; Hansen Kollock, Flage, Chazdon, Paine, & Higgins, 2012; Vitcenda, 2014; Welborn et al., 2016). Other methods, such as calculation of return on investment, social network analysis, and narrative approaches like most significant change, are more labor intensive but can likewise demonstrate broader impacts beyond those directly affecting program participants.
Lessons Learned from the Inaugural Poster Session
The inaugural public value poster session was held in October 2016. Fifteen public value posters were presented, and attendees were asked to review the posters on the basis of the four elements of the rubric. The participant review was intended to be more of an engagement strategy to get participants to think about public value messaging than a measurement strategy, but the information gleaned from the reviews was instructive. Table 1 shows the results. Attendees were positive about the posters, with average ratings at about 4, in the very good range. Notably, the weakest ratings and widest variation in ratings were for measurement of behavior change.
Table 1.
Attendee Ratings of Public Value Posters by Rubric Item

| Public value rubric item | Average rating | SD |
| --- | --- | --- |
| Target audience | 4.3 | 0.7 |
| Why Extension? | 4.3 | 0.9 |
| Broader impacts | 4.2 | 0.8 |
| Behavior or action outcomes | 4.0 | 1.0 |

Note. Table based on 21 responses. Means based on a rating scale of 1 (poor), 2 (fair), 3 (good), 4 (very good), and 5 (excellent).
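For readers who want to tabulate a session's reviews themselves, the following is a minimal Python sketch of how the per-item averages and standard deviations in Table 1 could be computed. The review records shown are hypothetical; only the rubric item names and the 1-to-5 rating scale come from the article.

```python
from statistics import mean, stdev

# The four public value poster rubric items rated by attendees.
RUBRIC_ITEMS = [
    "Target audience",
    "Why Extension?",
    "Behavior or action outcomes",
    "Broader impacts",
]

# Hypothetical attendee reviews (illustrative only, not the actual
# session data). Each review rates one poster on every rubric item
# using the 1 (poor) to 5 (excellent) scale.
reviews = [
    {"Target audience": 5, "Why Extension?": 4,
     "Behavior or action outcomes": 3, "Broader impacts": 4},
    {"Target audience": 4, "Why Extension?": 5,
     "Behavior or action outcomes": 4, "Broader impacts": 5},
    {"Target audience": 4, "Why Extension?": 4,
     "Behavior or action outcomes": 5, "Broader impacts": 4},
]

# Summarize each rubric item across all reviews, mirroring the
# "Average rating" and "SD" columns of Table 1. Here stdev() is the
# sample standard deviation; the article does not specify which
# variant was used.
for item in RUBRIC_ITEMS:
    ratings = [review[item] for review in reviews]
    print(f"{item}: mean={mean(ratings):.1f}, SD={stdev(ratings):.1f}")
```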
The team's key takeaways include the following points:
- The public value poster rubric offers a consistent structure for considering the impact of Extension programming that can be applied across diverse program areas.
- Developing a public value poster is a structured way to apply and build familiarity with the evaluating for public value framework.
- Gathering evidence of behavior change and broader impacts is an area for capacity building, but it need not be prohibitive. Poster presenters can collect and present these data using participant surveys, key informant interviews, and other accessible data collection methods.
- Fitting the public value narrative into a poster format promotes brevity. However, it can be challenging to address each rubric category adequately while keeping the poster from becoming too text-dense.
- Integrating both quantitative and qualitative data provides substantive ("hard") evidence of a program's public value as well as testimonials that bring that value to life.
We highly recommend the use of this approach in other states as well as at conferences of Extension professional associations. Using public value posters in Extension work is one way to build capacity for linking evaluation with public value and further demonstrating the value that Extension brings to communities.
Acknowledgments
The full University of Minnesota Extension evaluation team includes lead evaluators from each of Extension's four programmatic centers and the Regional Sustainable Development Partnerships. Emily Becher (Center for Family Development) and Samantha Grant (Center for Youth Development) participated in the public value poster session and reviewed this article. We also wish to thank the lead organizers of the fall program conference poster session: Brent Hales, Mary Ann Hennen, Trisha Sheehan, Mary Jo Katras, Elizabeth Templin, and Deb Zak.
References
Carroll, J. B., Dinstel, R. R., & Manton, L. M. (2015). Writing panels articulate Extension public value in the West. Journal of Extension, 53(6), Article 6TOT1. Available at: http://www.joe.org/joe/2015december/tt1.php
Chazdon, S. A. (2016, January 20). Worthy and effective public value narratives [eXtension Evaluation Community Blog post]. Retrieved from https://blogs.extension.org/evalcop/2016/01/20/worthy-and-effective-public-value-narratives/
Chazdon, S. A., & Paine, N. (2014). Evaluating for public value: Clarifying the relationship between public value and program evaluation. Journal of Human Sciences and Extension, 2(2), 100–119. Retrieved from http://media.wix.com/ugd/c8fe6e_8b2458db408640e580cfbeb5f8c339ca.pdf
Dart, J., & Davies, R. (2003). A dialogical, story-based evaluation tool: The most significant change technique. American Journal of Evaluation, 24(2), 137–155.
Emery, M., Higgins, L., Chazdon, S., & Hansen, D. (2015). Using ripple effect mapping to evaluate program impact: Choosing or combining the methods that work best for you. Journal of Extension, 53(2), Article 2TOT1. Available at: http://www.joe.org/joe/2015april/tt1.php
Franz, N. K. (2011). Advancing the public value movement: Sustaining Extension during tough times. Journal of Extension, 49(2), Article 2COM2. Available at: http://www.joe.org/joe/2011april/comm2.php
Franz, N. K. (2013). Improving Extension programs: Putting public value stories and statements to work. Journal of Extension, 51(3), Article 3TOT1. Available at: http://www.joe.org/joe/2013june/tt1.php
Franz, N. K., Arnold, M., & Baughman, S. (2014). The role of evaluation in determining the public value of Extension. Journal of Extension, 52(4), Article 4COM3. Available at: http://www.joe.org/joe/2014august/comm3.php
Hansen Kollock, D., Flage, L., Chazdon, S., Paine, N., & Higgins, L. (2012). Ripple effect mapping: A "radiant" way to capture program impacts. Journal of Extension, 50(5), Article 5TOT6. Available at: http://www.joe.org/joe/2012october/tt6.php
Haskell, J. E., & Morse, G. W. (2015). What is your library worth? Extension uses public value workshops in communities. Journal of Extension, 53(2), Article 2FEA1. Available at: http://www.joe.org/joe/2015april/a1.php
Kalambokidis, L. (2004). Identifying the public value in Extension programs. Journal of Extension, 42(2), Article 2FEA1. Available at: http://www.joe.org/joe/2004april/a1.php
Kalambokidis, L. (2011). Spreading the word about Extension's public value. Journal of Extension, 49(2), Article 2FEA1. Available at: http://www.joe.org/joe/2011april/a1.php
Majee, W., & Maltsberger, B. A. (2013). Unlocking public value: An evaluation of the impact of Missouri's Great Northwest Day at the Capitol program. Journal of Extension, 51(4), Article 4FEA10. Available at: http://www.joe.org/joe/2013august/a10.php
Moore, M. (1995). Creating public value: Strategic management in government. Cambridge, MA: Harvard University Press.
Morse, G., French, C., & Chazdon, S. (2016). The impact indicators tips booklet. Retrieved from http://aese.psu.edu/nercrd/impacts/impact-indicators-tips-booklet
Vitcenda, M. (2014). Ripple effect mapping makes waves in the world of evaluation. University of Minnesota Extension. Retrieved from http://www.extension.umn.edu/community/news/ripple-effect-mapping-making-waves-in-evaluation/
Welborn, R., Downey, L., Dyk, P. H., Monroe, P. A., Tyler-Mackey, C., & Worthy, S. L. (2016). Turning the tide on poverty: Documenting impacts through ripple effect mapping. Community Development, 47(3), 385–402.