The Journal of Extension

June 2010 // Volume 48 // Number 3 // Feature // v48-3a6

Measuring Outcomes of Extension Conferences: A Case Study of the National Extension Tourism Conference

What is the value of participation in national Extension conferences? As travel funds for Extension educators remain in short supply and conference sponsors demand measurable outcomes, research is needed to document impacts of national conferences on Extension programming. Using the 2006 National Extension Tourism Conference as a case study, we conducted an evaluation on-site and 6 months after the conference. Results indicate that 92% of respondents had improved their Extension programs as a direct result of the conference. Findings from the study suggest that impacts are substantial and that supporting national conferences is a worthwhile investment by Extension.

Lisa Chase
University of Vermont Extension
Brattleboro, Vermont

Diane Kuehn
Assistant Professor
State University of New York
College of Environmental Science and Forestry
Syracuse, New York


Do national conferences contribute to improvements in Extension programming, or are they merely budget expenses without significant benefits? As we began planning for the 2006 National Extension Tourism Conference, we learned that some potential sponsors suspected the latter and declined to provide funding because of the lack of documented outcomes attributable to Extension conferences. Now more than ever, travel funds for Extension personnel are in short supply. Conference attendees and their supervisors must decide whether sending Extension educators to national conferences is a worthwhile investment. To help potential sponsors, conference attendees, and Extension administrators make informed decisions, research is needed that examines the value of participation in national Extension conferences.

Ample literature examines the value of regional workshops and other training sessions for Extension audiences (Nagler, Bastian, Hewlett, & Weigel, 2007; Barrett, Swanson, & Song, 2005; Earnest, 1996); however, similar studies evaluating improvements to Extension programming as a direct result of participation in national Extension conferences are not available. Is this lack of literature due to the insignificance of the impacts, the difficulty involved in measuring these impacts, or simply a shortage of published evaluation results for national conferences? The study reported here addressed these questions by identifying how information and networking opportunities provided at a national Extension conference were applied to programming efforts by attendees during the 6 months following the conference.

The goal of the study was to assess short- and medium-term outcomes of the 2006 National Extension Tourism Conference (NET2006) through an on-site evaluation and a post-conference Web evaluation completed 6 months after the conference. The article begins with an explanation of the rationale for this research and a discussion of how that rationale relates to the logic model conceptual framework (Wholey, 1979; Taylor-Powell & Henert, 2008). Next, we describe our case study and our evaluation methods. We then present results regarding measurable outcomes and conclude with implications for evaluating Extension conferences.

Conceptual Framework

The need for evaluation of Extension programs has been previously noted in the literature (Radhakrishna & Martin, 1999), and efforts to improve evaluations of Extension programs have yielded several studies measuring the effectiveness of programs (e.g., Guion, Turner, & Wise, 2004; Scott, Reed, Kubena, & McIntosh, 2007). Improvements in Extension's evaluations are still needed (Rennekamp & Arnold, 2009). Evaluations of the impacts of national conferences on Extension programming are noticeably lacking in the literature, and questions exist regarding the benefits of national conferences for Extension programs. To address these concerns, we designed a detailed research protocol to measure outcomes of the 2006 National Extension Tourism Conference.

The logic model, originally developed by Joseph Wholey (1979), served as the conceptual framework for the study. Since its development by The Urban Institute, the logic model has been used widely to evaluate programs in government and non-profit sectors. Many state Extension systems have adapted the logic model for use in evaluating programs (Arnold, 2002). University of Wisconsin Extension, for example, has published a guide for using logic models (Taylor-Powell & Henert, 2008) and provides numerous examples on their Web site. The Generic Logic Model for CSREES Reporting serves as a basic template used by many state Extension services and was used for the purposes of the study.

Application of the logic model for evaluating the 2006 National Extension Tourism Conference is illustrated in Figure 1. The situation prior to NET2006 was one of limited travel budgets for Extension educators. Extension administrators needed to be convinced of the value of national conferences in order to approve travel expenses, and Extension educators needed to know that their programs would benefit through their participation in the conference. In addition, potential sponsors needed proof that the conference would lead to measurable outcomes.

Inputs included time and money. Conference organizers spent 1 year planning the event. Extension educators spent time at the conference, away from their offices, programs, and audiences. Money (spent by attendees on travel, accommodations, and registration fees) went towards paying for conference facilities, speakers, and special events such as field trips.

Activities at NET2006 included educational workshops in the form of concurrent sessions, keynote speakers of national prominence, networking opportunities with colleagues, and a range of field trips to help attendees learn about agritourism, outdoor recreation, and cultural tourism.

Participants in the conference were primarily Extension educators and other service providers, as well as researchers interested in tourism and community development.

Outputs included 209 conference attendees, 93 oral presentations, 30 poster presentations, and a conference Web site with materials, including presentations.

Figure 1.
Logic Model for the 2006 National Extension Tourism Conference


The situation, inputs, activities, participation, and outputs of NET2006 are similar to those of many other national Extension conferences. However, most national conference evaluations focus only on short-term outcomes. An evaluation measuring both short- and medium-term outcomes is noteworthy for future national Extension conferences, especially because documentation of these outcomes has not previously been published.

Short-term outcomes are focused on knowledge gained at the conference and measured immediately following the conference. We also measured knowledge gains 6 months after the conference to assess their permanency. Medium-term outcomes emphasize actions taken by conference attendees, including developing new programs, expanding or improving existing programs, and applying for additional grants. We measured medium-term outcomes through a Web survey conducted 6 months after the conference. Long-term outcomes focus on changes in conditions due to actions taken by attendees; examples from the National Extension Tourism Conference include new Extension programs reaching new audiences and strengthening long-term programming. Because the study spanned only 6 months, long-term outcomes were not identified.

The hypotheses related to short-term outcomes and permanency of knowledge gains are as follows:

  1. There is no significant difference between the on-site and post-conference evaluations regarding the proportions of respondents identifying an increased understanding of tourism issues (p ≤ 0.05; two-independent-sample z-test).

  2. There is no significant difference between the on-site and post-conference evaluations regarding the proportions of respondents identifying an increased number of tourism contacts (p ≤ 0.05; two-independent-sample z-test).

  3. There is no significant difference between the on-site and post-conference evaluations regarding the proportions of respondents identifying an increased awareness of tourism-related programs (p ≤ 0.05; two-independent-sample z-test).

  4. There is no significant difference between the on-site and post-conference evaluations regarding the proportions of respondents identifying improved access to information and other tourism resources (p ≤ 0.05; two-independent-sample z-test).

Medium-term outcomes are assessed by examining the proportion of respondents who actually improved programs, products, services, and research; better met the needs of tourism operators; developed new partnerships; applied for grants; used information obtained at the conference in an existing project; organized a conference or workshop; and improved or initiated a research project. Levels of implementation for each of the medium-term outcome activities were identified according to the following categories: slight implementation (i.e., identified by 1% to 33% of respondents), moderate implementation (34% to 66% of respondents), and high implementation (67% to 100%).
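As a minimal sketch, the implementation categories defined above can be expressed as a simple classifier. The function name and the "none" category for 0% are illustrative assumptions, not part of the study:

```python
def implementation_level(pct):
    """Map the percentage of respondents reporting an action to the
    study's implementation categories (illustrative helper)."""
    if pct <= 0:
        return "none"      # assumption: 0% falls outside the defined bands
    if pct <= 33:
        return "slight"    # 1% to 33% of respondents
    if pct <= 66:
        return "moderate"  # 34% to 66% of respondents
    return "high"          # 67% to 100% of respondents

# For example, an action reported by 41% of respondents:
print(implementation_level(41))  # → moderate
```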

Case Study and Methods
The National Extension Tourism Conference is held approximately every 2 years in the United States, rotating its location among each of the four Regional Rural Development Center regions. Hosted by the National Extension Tourism Design Team (a committee of Extension professionals created by the United States Department of Agriculture in the mid-1990s), the National Extension Tourism Conference is designed to provide Extension and tourism professionals with up-to-date information about tourism outreach, education, and research efforts and to provide networking opportunities. In 2006, the National Extension Tourism Conference was held September 10-13 in Burlington, Vermont. Attracting 209 attendees from 36 states, three Canadian provinces, and several other countries, the conference provided learning experiences through 93 oral and 30 poster presentations as well as numerous networking opportunities through field trips and receptions.

To evaluate the short- and medium-term outcomes of NET2006 on Extension programming, two surveys were conducted: an on-site hardcopy questionnaire completed by attendees at the conference, followed by a Web survey completed by attendees 6 months after the conference. In both cases, the entire population of conference attendees was surveyed. To protect the confidentiality of respondents, a matched sample design was not used. We recognize that using different survey instruments (hardcopy questionnaire vs. Web survey) could pose problems for a population where computer access and computer literacy are concerns. We do not believe this is an issue for NET2006 attendees, because we had email addresses for all attendees.

The on-site questionnaire was created by the conference organizers based on conference evaluations used to measure short-term outcomes in New England and New York. The survey instrument was pre-tested by members of the National Extension Tourism Design Team and revised accordingly. The questionnaire was inserted into the conference program packets distributed to attendees. Announcements were made during the conference to inform attendees of the location of the evaluation form in their packets as well as the fact that a second evaluation would be conducted 6 months after the conference. A box for completed evaluations was prominently displayed on the registration table. The final session closed with a reminder to hand in completed questionnaires before leaving the conference site.

Six months after the conference, a second evaluation was conducted via the Internet. To ensure validity and reliability, both of which can be a problem with Web surveys (Couper, Traugott, & Lamias, 2001), widely accepted protocols for Internet surveys recommended by Dillman (2007) and Best and Krueger (2004) were carefully followed. The questions were similar to those used on the first (i.e., on-site) evaluation. One change was the rewording of the fourth question to include a broader definition of outputs and intended audiences. The question was changed from "Will you develop programs/products/services for tourism and recreation operators in the next 18 months?" to "Did attending the conference improve your programs, products, services, research, etc., for your intended audiences?" An additional question in the second survey followed the fourth question and asked how the conference contributed to attendees' productivity in other ways. Attendees were also asked if they had completed the first (i.e., on-site) questionnaire.

Data from both questionnaires were entered into Excel, and descriptive statistics were generated. Significance testing was completed using SPSS. The proportions of respondents answering "yes" to questions on the first and second evaluations were compared using two-independent-sample z-tests (p ≤ 0.05). In addition, one question on the second evaluation (i.e., "Did attending the conference improve your programs, products, services, research, etc., for your intended audiences?") was used to identify whether response differences existed between those who completed both evaluations and those who completed only the second. Two-independent-sample z-tests were also used to compare the proportions of affirmative responses from these two groups (p ≤ 0.05). Qualitative analysis was used to categorize responses to open-ended survey questions (Patton, 2002).
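For readers who wish to reproduce the significance testing outside SPSS, a pooled two-independent-sample z-test for proportions can be sketched as below. The counts are reconstructed approximately from the reported percentages for hypothesis 1 and are illustrative only, not the study's raw data:

```python
import math

def two_prop_ztest(x1, n1, x2, n2):
    """Pooled two-independent-sample z-test for equality of two
    proportions; returns the z statistic and a two-sided p-value."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)  # pooled proportion under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF (computed via erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Roughly 96% of 67 on-site vs. 97% of 96 post-conference respondents
# reported increased understanding of tourism issues (hypothesis 1).
z, p = two_prop_ztest(64, 67, 93, 96)
print(f"z = {z:.2f}, p = {p:.2f}")  # → z = -0.45, p = 0.65
```

With p well above 0.05, the null hypothesis of equal proportions is not rejected, consistent with the study's finding of no significant difference.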

Results
A total of 69 attendees responded to the on-site survey, for a response rate of 33% out of the adjusted attendee population of 207. (Both authors, who were conference attendees and planners, did not complete the questionnaire because of potential bias and, thus, were omitted from the total attendee population of 209.) Interestingly, the response rate was higher for the post-conference Web survey, with 96 individuals responding, for a response rate of 46%. The high response rate for the Web survey strengthens the assessment that using different survey instruments (hardcopy questionnaire vs. Web survey) did not pose problems for the target population, who had adequate computer access and literacy to complete a Web survey.

To assess the potential for bias stemming from respondents who answered only one of the two surveys, we selected a key question regarding our primary topic of concern: improvements in Extension programming. No significant difference (p = 0.48) was identified for this question between the proportion of respondents who completed both evaluations and answered in the affirmative (0.91; n = 58) and the proportion of those who completed the second evaluation only and answered in the affirmative (0.90; n = 18). Thus, it appears unlikely that completing the first evaluation influenced responses on the second for questions related to improvements in programming.

For the on-site evaluation, most of the attendees (74%) were Extension professionals. In addition, 4% were researchers, 4% were state agency employees, 4% were students, and 14% were involved in other aspects of tourism education (e.g., non-governmental organization employees, Canadian government officials, education council members, and private tourism business operators). The post-conference evaluation had slightly different results, with 67% of respondents involved in Extension, 7% in research, 4% in state agencies, 5% students, and 17% involved in some other aspect of tourism education. Seventy-five percent of the respondents to the post-conference evaluation had completed the on-site evaluation.

Comparisons of the two evaluations were conducted to assess the permanency of knowledge gains. This comparison reveals similarities between the responses given at the conference and those given 6 months later for the short-term outcomes (Table 1). For instance, 97% of respondents to the post-conference evaluation felt that their understanding of tourism issues had been increased by the conference (hypothesis 1), a percentage that is not significantly different from the 96% who indicated an increased understanding of tourism issues on the on-site questionnaire. Similarly, high percentages of respondents indicated that NET2006 increased their contacts with others working to support tourism (97% on-site; 96% post-conference; hypothesis 2), increased their awareness of programs related to tourism (97% on-site; 97% post-conference; hypothesis 3), and helped them access information and resources on tourism (99% on-site; 94% post-conference; hypothesis 4). No significant differences were identified between the proportions of respondents indicating "yes" for any of these variables (p ≤ 0.05).

Table 1.
Responses to Statements Concerning Knowledge Outcomes

Statement | On-Site Evaluation: Yes / No | Post-Conference Evaluation: Yes / No
Attending this conference increased my understanding of tourism issues (on-site n=67; post-conference n=96). | 96% / 4% | 97% / 3%
Attending this conference increased my contacts with others working to support tourism (on-site n=68; post-conference n=94). | 97% / 3% | 96% / 4%
Attending this conference increased my awareness of programs related to tourism (on-site n=67; post-conference n=95). | 97% / 3% | 97% / 3%
Attending this conference helped me access information and resources on tourism (on-site n=67; post-conference n=94). | 99% / 1% | 94% / 6%

A high proportion of respondents indicated on the post-conference evaluation that they had implemented actions related to medium-term outcomes. Specifically, a high percentage of respondents (92%; n = 95) indicated on the post-conference evaluation that attending the conference had improved their programs, products, services, and research for their intended audiences. A high percentage (73%; n = 91) also reported that the conference helped them better meet the needs of tourism operators.

When asked on the post-conference questionnaire how attending the conference contributed to respondents' productivity in other ways, additional medium-term outcomes influenced by NET2006 were identified from a list of options (Table 2). High implementation was noted for submitting a grant proposal resulting from new collaborations and information received at the conference (88% of respondents). Moderate implementation was noted for using conference information in an ongoing project (41%) and organizing a conference or workshop (36%), and slight implementation for developing new partnerships (27%) and initiating or improving a research project (28%) based on attendance at NET2006.

Table 2.
Responses to the Post-Conference Evaluation About Additional Ways the Conference Contributed to the Respondents' Productivity

Additional Actions Taken | n | Yes / No
Submitted a grant proposal | 93 | 88% / 12%
Used information in an existing project | 80 | 41% / 59%
Organized a conference or workshop | 84 | 36% / 64%
Developed new partnerships | 84 | 27% / 73%
Initiated or improved a research project | 18 | 28% / 72%

Comments included other tangible outcomes from networking, such as receiving job offers and graduate study opportunities. Examples of comments pertaining to specific actions taken include:

  • "I have developed agritourism marketing workshops based on the knowledge that I acquired solely through my attendance at NET 2006."

  • "Since the conference, we have held a Wildflower Conference and are developing a paddling trail."

  • "I am incorporating information I learned from attending one of the sessions into an educational outreach program I am offering to producers of handmade and homegrown products."

  • "Used the idea of a 'quest' from a presentation to turn our public art project into a scavenger hunt and education project as well, utilizing facts on the sculptures and web log in to 'quiz' participants on what they've learned."

  • "I have used the idea of regional tourism to develop a program that brings 13 regional farmers market managers together to talk about the problems they all share in promoting their markets."

  • "The improvements to my programs are attributable to the contacts made and information shared at the conference."

  • "Supported our vision of the Festival we are re-organizing into a business model. I also used conference information to inform local hotel/motel tax committee about Baby Boomer tourists, so committee can make educated decisions about distribution of money in grant proposals."

  • "Saw outside the box ideas to incorporate in programs."

  • "We utilized conference resources to establish workshop topic themes used to create the 2007 New Mexico Rural Tourism Conference event scheduled for April 26-27, 2007 in Carlsbad, New Mexico."

  • "Allowed us to extend a national program by incorporating new cooperators; resulting in a better product and service; as well as increasing support for the participating cooperators."

  • "Integrated findings into university courses on community tourism development. Established research project based on opportunity presented at conference. Currently seeking ways to include extension perspective in existing tourism societies/conferences."

  • "Improved research through increased awareness of accessible online resources, and self help manuals for operators. Improved services through sharing lessons learned on the topic of ag tourism alliances and building ag tourism destinations."

  • "Developed a new program based on Bucket Head Bob from Kentucky."

Discussion and Recommendations for Conference Planners

The objective of the study reported here was to assess the short- and medium-term outcomes of NET2006. The first hypothesis regarding permanency of knowledge gains (i.e., there is no significant difference between the on-site and post-conference evaluations regarding the respondents' increased understanding of tourism issues) was supported by the data. In other words, upon leaving the conference, most attendees who had indicated an increased awareness of tourism issues as a result of the conference maintained this perception 6 months later. Similar results were identified for the second, third, and fourth hypotheses related to the respondents' increased tourism contacts, increased awareness of tourism-related programs, and help with access to information and resources on tourism, respectively.

The assessment of medium-term outcomes reveals high to moderate levels of implementation of different programmatic efforts in the 6 months following NET2006. According to survey respondents, the conference contributed to several important types of Extension efforts, including grant proposals, other conferences and workshops, and ongoing projects. Networking, an objective of the conference, was also shown by respondents to be an important outcome regarding both informal contacts and formal partnerships. The value of Extension conferences may truly be in their ability to provide ideas and information that lead to successful programs and partnerships.

Two major implications for planning and evaluating conferences were identified from the NET2006 evaluation. First, as indicated by the lack of significant differences for the hypotheses, on-site evaluations likely provide an adequate indication of conference participants' intentions to conduct future Extension programming and may be an adequate means of evaluating intended short-term outcomes. However, follow-through on intended actions (i.e., medium-term outcomes) cannot be assessed with an on-site evaluation; a second survey at a later date is necessary to identify these realized impacts. If the "complete picture" of conference impacts is needed, then a post-conference evaluation is necessary.

Second, the higher response rate of the post-conference evaluation suggests that Web surveys could be used in conjunction with or instead of on-site (i.e., hardcopy) questionnaires to elicit better response rates for evaluations. If a Web survey is used, however, distribution of the questionnaire to conference attendees should take place via email immediately following the conference to prevent any recall bias. This type of survey will only be effective if conference attendees have adequate access to email and the Internet (as do Extension personnel).

The study is a first step toward assessing the impacts of national conferences on Extension programming. Although the impacts may be difficult to identify and measure, the results of the study indicate that many positive short- and medium-term outcomes resulted from the 2006 National Extension Tourism Conference. Further research on outcomes is needed. We recommend that planners of national Extension conferences evaluate short- and medium-term outcomes of conferences to assess the value of conferences for contributing to Extension programming and to strengthen the impacts of future conferences. By examining several types of Extension conferences over a long period of time, we will improve our ability to design conferences that maximize the potential for positive impacts on Extension programming.

References
Arnold, M. E. (2002). Be "logical" about program evaluation: Begin with learning assessment. Journal of Extension [On-line], 40(3), Article 3FEA4.

Barrett, G. J., Swanson, P. W., & Song, A. V. (2005). Evaluation of training program for caregivers to aging adults. Journal of Extension [On-line], 43(3), Article 3RIB6.

Best, S. J., & Krueger, B. S. (2004). Internet data collection. Thousand Oaks, CA: Sage.

Couper, M. P., Traugott, M. W., & Lamias, M. J. (2001). Web survey design and administration. Public Opinion Quarterly, 65, 230-253.

Dillman, D. A. (2007). Mail and Internet surveys: The tailored design method (2nd ed.). Hoboken, NJ: John Wiley & Sons, Inc.

Earnest, G. W. (1996). Evaluating community leadership programs. Journal of Extension [On-line], 34(1), Article 1RIB1.

Guion, L. A., Turner, J., & Wise, D. K. (2004). Design, implementation, and evaluation of an elder financial abuse program. Journal of Extension [On-line], 42(3), Article 3FEA6.

Nagler, A., Bastian, C. T., Hewlett, J. P., & Weigel, R. R. (2007). Risk Management for Ag Families: Evaluation of an integrated educational program for producers on the Northern Plains. Journal of Extension [On-line], 45(3), Article 3RIB3.

Patton, M. Q. (2002). Qualitative evaluation and research methods (3rd ed.). Thousand Oaks, CA: Sage.

Radhakrishna, R., & Martin, M. (1999). Program evaluation and accountability training needs of Extension agents. Journal of Extension [On-line], 37(3), Article 3RIB1.

Rennekamp, R. A., & Arnold, M. E. (2009). What progress, program evaluation? Reflections on a quarter-century of Extension evaluation practice. Journal of Extension [On-line], 47(3), Article 3COM1.

Scott, A. R., Reed, D. B., Kubena, K. S., & McIntosh, W. A. (2007). Evaluation of a group administered 24-hour recall method for dietary assessment. Journal of Extension [On-line], 45(1), Article 1RIB3.

Taylor-Powell, E., & Henert, E. (2008). Developing a logic model: Teaching and training guide. Madison, WI: University of Wisconsin-Extension. Retrieved October 17, 2008.

Wholey, J. S. (1979). Evaluation: Promise and performance. Washington, DC: Urban Institute Press.