The Journal of Extension - www.joe.org

April 2019 // Volume 57 // Number 2 // Research In Brief // v57-2rb8

Using Time to Assess Extension Exhibits

Abstract
In addition to other evaluation methods, the amount of time visitors spend at an exhibit may indicate visitors' level of interest and engagement with the exhibit content. We describe methods from the museum field where time is used as a measurement of exhibit effectiveness and discuss findings from a study in which we used time to evaluate an Extension exhibit. This information has implications for Extension professionals interested in using time to evaluate visitors' interest in and engagement with their exhibits.


Jeffrey Rollins
Assistant Department Head; Manager, Exhibits and Events
Department of Agricultural Communications
jjrollin@purdue.edu

Sunnie Watson
Assistant Professor of Learning Design and Technology
Department of Curriculum and Instruction
sunnieleewatson@purdue.edu

Purdue University
West Lafayette, Indiana

Introduction

Extension has delivered research-based information to the public since the beginning of the 20th century (Purdue University Agricultural Experiment Station, 1912). County Extension agents and campus-based specialists deliver information on a wide variety of topics to farmers, consumers, business owners, homeowners, young people, and families. In 1912, Indiana's Purdue Extension presented exhibits at 25 county fairs and the Indiana State Fair and used three freight trains to deliver exhibits about wheat improvement and livestock (Purdue University Agricultural Experiment Station, 1912). The delivery methods, scope, and sophistication of Extension exhibits have changed since then. An example of an Extension exhibit developed with deliberation and intentionality is A Salamander Tale, which is based on learning theories and models and was designed for a specific audience (Rollins & Watson, 2017). These traits make it similar to museum exhibits (Danilov, 1986; Houting, Taylor, & Watts, 2010).

To measure the effectiveness of Extension exhibits at a state or county fair or other informal setting, Extension professionals must look outside their field for ideas that will improve their evaluation methods. In this article, we present a review of the exhibit evaluation literature, briefly discuss methods from the museum field related to using time and holding power as measurements of exhibit effectiveness, and discuss a study of an Extension exhibit in which we used time to assess holding power.

Assessing Extension Exhibits

In past evaluations of Extension exhibits, researchers have used logic models to guide evaluation design (e.g., McCurdy et al., 2010). The 4-H Science logic model, introduced in 2007 and updated in 2010, includes a section focused on outcomes such as improved attitudes, increased awareness, and improved science skills and knowledge (National 4-H Council, 2010). The National 4-H Common Measures take assessment a step further with scales designed to measure attitude, skill, interest, and application (Lewis, Horrillo, Widaman, Worker, & Trzesniewski, 2015). Others have used quantitative methods for analyzing exhibit evaluation data but included data collected in a formal classroom setting (Carrozzino & Smith, 2008).

Museum Evaluation Practices

In museum settings, evaluation generally focuses on learning or engagement. The effectiveness of museum exhibits can be measured in a variety of ways. Some commonly used measures relate to cognitive change, problem solving, motivation, and creativity (Donald, 1991). Some studies tie interest and cognitive change resulting from engagement with museum exhibits to emotion (Dahl, Entner, Johansen, & Vittersø, 2013). When visitors find an exhibit pleasurable, interest increases. Dahl et al. (2013) used a 17-question survey to understand how visitors' overall interest in an exhibit was related to ease of comprehension, cohesion, vividness, engagement, emotiveness, and prior knowledge.

Other authors have categorized measures differently. According to Diamond, Luke, and Uttal (2009), evaluation measures fall into the categories of knowledge retention, implicit memory, conceptual change, task analysis using think-aloud protocols, and visual-spatial memory. In a study at the Lawrence Hall of Science on the campus of the University of California, Berkeley, researchers defined exhibit effectiveness as "measurable transmission of information about scientific principles from the exhibits to visitors" (Eason & Linn, 1976, p. 46).

Regardless of the measure used, situational constraints can make evaluation problematic. Finding necessary time, human resources, and funding can make even simple studies challenging (Bamberger, Rugh, & Mabry, 2012). The resources required to gain consent and survey minors can add an additional challenge to the process of evaluating exhibits. On the other hand, measuring time spent at an exhibit or engaged with exhibit elements is unobtrusive and can reduce the barriers to exhibit evaluation.

Time as a Measure of Exhibit Effectiveness

Time as an Unobtrusive Method of Assessment

Using time to measure exhibit effectiveness is an unobtrusive method for assessing interest, motivation, and cognition (Falk, 1983). Time is considered an unobtrusive measure because the researcher need not interact with the subjects to record the data. Through direct observation or indirect observation (via video cameras), researchers may measure the time visitors spend in an exhibit space or engaged with specific exhibit features (Sanford, 2010).

Time and Visitor Behavior

Various independent studies have shown that visits to individual exhibits usually last between 30 s and 90 s regardless of exhibit type and setting. Bitgood, Patterson, and Benefield (1988) found that visitors to live animal exhibits at zoos averaged about 1 min when the animals were active and about 30 s when the animals were inactive. Boisvert and Slez (1994) noted that whether exhibits involved high or low interactivity, conveyed concrete or abstract concepts, or were simple or complex in their presentation, the average time spent at an exhibit was about 1 min. Falk (1983) likewise observed average visit times of 1 min among 123 visitors but also found that visitors who tried to see as many exhibits as possible in a museum spent less time at each exhibit. Sandifer (1997) found that the time spent at an exhibit was nearly the same whether the museum visit occurred on a weekday or a weekend, with averages of 1.4 min and 1.3 min, respectively. These examples lead to the conclusion that museum visitors typically spend very little time at each exhibit.

Holding Power

Holding power is another value used to measure exhibit effectiveness. Holding power is defined as the amount of time spent at an exhibit divided by the minimum amount of time it takes to read any text and interact with any hands-on activities (Donald, 1991; Peart, 1984). Peart's 1984 study of holding power categorized exhibits as more concrete or more abstract. Exhibits that consisted mostly of written or spoken text were considered more abstract, and exhibits with more sensory involvement, such as sound, simulations, hands-on interactive elements, or artifacts, were considered more concrete. Using a questionnaire to measure changes in knowledge and attitude and visitor tracking to assess time and engagement, Peart found that concrete exhibits produced more gains in knowledge and higher holding power. Boisvert and Slez (1994) found that highly interactive, concrete exhibits had a holding power more than 50% higher than that of any other exhibit type in their study.
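Stated as a simple ratio (our notation; the sources give the definition in prose), holding power is

    holding power = t_spent / t_min

where t_spent is the observed time visitors spend at the exhibit and t_min is the minimum time needed to read all text and complete all hands-on activities. Values below 1 indicate that visitors typically leave before engaging fully; values above 1 indicate that they linger beyond the minimum.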

Purpose and Methods

Applying museum exhibit evaluation methods, we undertook a study to evaluate the effectiveness of the Extension exhibit What's Bugging Belva? The specific purpose of the study was to assess the holding power of the exhibit.

Participants and Setting

Participants were visitors to a free outdoor science festival held on Purdue University's campus. The exhibit was housed in a 20-by-20-ft tent. The tent had only one doorway through which visitors entered and exited.

Exhibit Description

What's Bugging Belva? is a 400-ft² exhibit about insects. The exhibit was developed by Purdue University's Exhibit Design Center. The exhibit is presented as a children's book, with an introductory panel and four stations. The first station discusses monarch butterflies, the second station discusses dragonflies, the third station discusses burying beetles, and the fourth station defines true bugs. Stations 1 through 3 have hands-on interactive elements consisting of small panels visitors lift to find more information about insects. Station 4 includes a hand-operated knob that extends an insect's proboscis. The exhibit furniture is large and colorful. This trait, combined with the interactivity at each station, places What's Bugging Belva? in the category of highly interactive, concrete exhibits as described by Boisvert and Slez (1994).

Data Collection

A camera mounted above the doorway had a field of view encompassing the entire exhibit space. Signs outside and inside the tent informed visitors that they were being recorded on video for research that would be used to improve future exhibits. We reviewed 2 hr of video and recorded the times distinct groups of visitors spent at the exhibit, beginning when the first group member walked into the space and ending when the last group member left the space. The number of visitors in each group was recorded, and if children were present in a group, the number of children was recorded.
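As a minimal sketch of how such video review data might be tabulated, assuming each group's entry and exit times are logged in seconds from the start of the recording along with group composition (the records and field layout here are hypothetical, not our actual data):

    from statistics import mean

    # Hypothetical log entries from video review:
    # (entry_s, exit_s, group_size, n_children)
    observations = [
        (12, 148, 3, 2),
        (95, 201, 2, 0),
        (230, 344, 4, 2),
    ]

    # Dwell time runs from the first member's entry to the last member's exit.
    dwell_times = [exit_s - entry_s for entry_s, exit_s, _, _ in observations]

    print(f"groups observed: {len(observations)}")
    print(f"total visitors:  {sum(size for _, _, size, _ in observations)}")
    print(f"children:        {sum(kids for _, _, _, kids in observations)}")
    print(f"mean dwell time: {mean(dwell_times):.0f} s")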

Results and Discussion

During the 2-hr period, 112 groups moved through the space. In total, the groups comprised 352 individuals, of whom 166 were children. The average time spent viewing and interacting with the exhibit was 124 s. The minimum amount of time it takes to read any text and interact with any hands-on activities within the exhibit is 185 s. By applying the formula for holding power—the amount of time spent at an exhibit divided by the minimum amount of time it takes to read any text and interact with any hands-on activities (Donald, 1991; Peart, 1984)—we calculated the holding power of What's Bugging Belva? as .67. What's Bugging Belva? meets the definition of a simple, concrete exhibit with high interaction (Boisvert & Slez, 1994; Peart, 1984). Exhibits studied by Boisvert and Slez (1994) that had similar characteristics had an average holding power of .47. Exhibits studied by Peart (1984) that had similar characteristics had an average holding power of .69. The holding power of What's Bugging Belva? compares favorably with the museum exhibits in those studies and indicates that the exhibit's design was well executed.
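In the notation introduced above, the calculation is simply

    holding power = 124 s / 185 s ≈ .67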

Limitations of Using Time as a Measure of Exhibit Effectiveness

Time as a measure of exhibit effectiveness is easy to record but requires an understanding of the myriad factors that may influence how exhibit visitors behave. For example, group size and visitors' scheduled activities beyond a museum visit can contribute to length of stay at a museum exhibit (Falk, 1982; Sandifer, 1997). Other conditions not directly connected to the exhibit design or interactivity also can influence time spent at an exhibit. For instance, studies of visitors' traffic patterns show that a museum layout can force visitors to spend more or less time in certain areas (Klein, 1993). Also, whereas day of the week does not seem to make a significant difference in time spent at museum exhibits, the point in time during a museum visit at which one arrives at an exhibit does make a difference; toward the end of the time allotted for the museum visit, less time is usually spent with each exhibit (Bohnert & Zukerman, 2014; Sandifer, 1997).

Conclusions

The exhibits discussed in this article are three-dimensional exhibits that visitors view by walking through and around the exhibit furniture. However, the methods discussed for measuring time and holding power also could apply to poster presentations or tabletop displays. Calculating the holding power of individual posters, displays, and exhibits could be useful for comparing different styles and approaches for presenting content. For instance, 4-H participants could use holding power to assess poster projects and use the results to improve future presentations. Measuring the time visitors spend at an exhibit may provide Extension professionals with data that indicate interest, potential knowledge gain, or the possibility of the application of the exhibit content beyond the exhibit visit.

However, any assessment of time may be more effective if combined with at least one other measurement to establish a meaningful correlation. For example, combining the measurement of time with a simple, kiosk-based survey containing questions about a specific exhibit station would provide a correlation between time and reported interest. Extension evaluators could observe visitors in aggregate for a period of time while running the kiosk-based survey simultaneously. Once the observation is concluded, the kiosk could be shut down. The Extension evaluators would then have the two measures to compare to determine the relationship between time spent at the exhibit and interest level. The same methodology could then be used for collecting data on a different exhibit station. By using this approach, evaluators could discover patterns that may indicate which exhibit characteristics are more effective than others. Ideally, evaluators would involve more than one additional source of data. Sanford's (2010) research on three different behaviors is an example of a type of study that would not be overly complicated and could involve collecting data on visitors' time at an exhibit, the exhibit's holding power, and visitors' reported interest in the exhibit. If performed using a stand-alone kiosk, such a study would provide a clearer picture of what is happening in the exhibit environment while requiring limited human and material resources. The bottom line is that adapting techniques from the field of museum exhibit evaluation may be a way for Extension professionals to expand and improve their evaluation efforts.
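As one illustration of this combined approach, the following sketch pairs each observation's dwell time with a kiosk-reported interest rating and computes Pearson's r. The values are invented placeholders, and the ability to pair anonymous time observations with kiosk responses is an assumption about the study design, not something we tested:

    from statistics import correlation  # requires Python 3.10+

    # Hypothetical paired measurements for one exhibit station:
    # dwell time in seconds and the kiosk interest rating (1-5 scale).
    dwell_s  = [45, 160, 75, 210, 30, 120]
    interest = [2, 4, 3, 5, 1, 4]

    # Pearson's r indicates how strongly time spent tracks reported interest.
    r = correlation(dwell_s, interest)
    print(f"Pearson r between dwell time and reported interest: {r:.2f}")

A consistently strong positive r across stations would support using time alone as a lighter-weight proxy for interest in future evaluations.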

References

Bamberger, M., Rugh, J., & Mabry, L. (2012). RealWorld evaluation: Working under budget, time, data, and political constraints (2nd ed.). Thousand Oaks, CA: SAGE.

Bitgood, S., Patterson, D., & Benefield, A. (1988). Exhibit design and visitor behavior: Empirical relationships. Environment and Behavior, 20(4), 474–491.

Bohnert, F., & Zukerman, I. (2014). Personalised viewing-time prediction in museums. User Modeling and User-Adapted Interaction, 24(4), 263–314.

Boisvert, D. L., & Slez, B. J. (1994). The relationship between visitor characteristics and learning-associated behaviors in a science museum discovery space. Science Education, 78(2), 137–148.

Carrozzino, A. L., & Smith, S. S. (2008). Evaluation of a wildlife education exhibit for youth. Journal of Extension, 46(4), Article 4RIB6. Available at: https://www.joe.org/joe/2008august/rb6.php

Dahl, T., Entner, P., Johansen, A., & Vittersø, J. (2013). Is our fascination with museum displays more about what we think or how we feel? Visitor Studies, 16(2), 160–180.

Danilov, V. (1986). Discovery rooms and kidspaces: Museum exhibits for children. Science and Children, 23(4), 6–11.

Diamond, J., Luke, J., & Uttal, D. (2009). Practical evaluation guide: Tools for museums and other informal educational settings (2nd ed.). American Association for State and Local History book series. Lanham, MD: AltaMira Press.

Donald, J. G. (1991). The measurement of learning in the museum. Canadian Journal of Education, 16(3), 371–382.

Eason, L., & Linn, M. (1976). Evaluation of the effectiveness of participatory exhibits. Curator: The Museum Journal, 19(1), 45–62.

Falk, J. (1982). The use of time as a measure of visitor behavior and exhibit effectiveness. Roundtable Reports, 7(4), 10–13.

Falk, J. (1983). Time and behavior as predictors of learning. Science Education, 67(2), 267–276.

Houting, B. A., Taylor, M. J., & Watts, S. (2010). Learning theory in the museum setting. In K. Fortney & B. Sheppard (Eds.), An alliance of spirit: Museum and school partnerships (pp. 23–30). Lanham, MD: Rowman & Littlefield.

Klein, H. (1993). Tracking visitor circulation in museum settings. Environment and Behavior, 25(6), 782–801.

Lewis, K. M., Horrillo, S. J., Widaman, K., Worker, S. M., & Trzesniewski, K. (2015). National 4-H Common Measures: Initial evaluation from California 4-H. Journal of Extension, 53(2), Article 2RIB3. Available at: https://www.joe.org/joe/2015april/rb3.php

McCurdy, S. M., Johnson, S., Hampton, C., Peutz, J., Sant, L., & Wittman, G. (2010). Ready-to-go exhibits expand consumer food safety knowledge and action. Journal of Extension, 48(5), Article 5TOT10. Available at: https://joe.org/joe/2010october/tt10.php

National 4-H Council. (2010, November). 4-H Science logic model. Retrieved from https://4-h.org/wp-content/uploads/2016/02/4-H-Science-Logic-Model.pdf

Peart, B. (1984). Impact of exhibit type on knowledge gain, attitudes, and behavior. Curator: The Museum Journal, 27(3), 220–237.

Purdue University Agricultural Experiment Station. (1912). Annual report of the agricultural experiment station, Lafayette, Indiana. West Lafayette, IN: Purdue University Press.

Rollins, J., & Watson, S. L. (2017). A Salamander Tale: Effective exhibits and attitude change. Journal of Extension, 55(3), Article 3RIB2. Available at: https://www.joe.org/joe/2017june/rb2.php

Sandifer, C. (1997). Time-based behaviors at an interactive science museum: Exploring the differences between weekday/weekend and family/nonfamily visitors. Science Education, 81(6), 689–701.

Sanford, C. W. (2010). Evaluating family interactions to inform exhibit design: Comparing three different learning behaviors in a museum setting. Visitor Studies, 13(1), 67–90.