April 2007 // Volume 45 // Number 2 // Feature Articles // 2FEA2


Applied Research Initiative: Training in the Scholarship of Engagement

Abstract
Extension scholarship and research have become key issues in the United States. We describe a process developed in Northwest Ohio to teach applied research skills to field educators using classes, projects, and mentors. This is followed by an analysis of formative and summative evaluations of participants, including an 18-month follow-up survey. The evaluations indicated generally greater understanding and use of applied research methods and increased involvement in academic papers and presentations. As a result of the evaluations, the program has been revised and is being offered statewide.


Gregory A. Davis
Extension Specialist
Columbus, Ohio
Davis.1081@osu.edu

Cynthia Burggraf-Torppa
Center Specialist
Findlay, Ohio
Torppa.1@osu.edu

Thomas M. Archer
Leader, Program Development and Evaluation
Columbus, Ohio
Archer.3@osu.edu

Jerold R. Thomas
Center Director
Findlay, Ohio
Thomas.69@osu.edu

The Ohio State University


In 1996, Boyer coined the term "Scholarship of Engagement" when he challenged America's colleges and universities to become more involved with the needs and challenges facing our communities and country. He noted that the public had lost confidence in the ability of institutions of higher education to contribute to the search for solutions to our social, economic, civic, and ethical problems. This perspective may hold some truth: the academy has traditionally revered scholarship above teaching and service, and scholarship has traditionally been defined as research activities that formulate, expand, or evaluate theory (Ary, Jacobs, & Razavieh, 1996).

In contrast, Extension is widely recognized as the arm of the academy that engages the public and directly addresses social, economic, civic, and ethical problems. To this end, many academic units turn to Extension as a conduit to distribute their research findings. Because of this, Extension professionals and systems have been criticized for being soft on scholarship because they do not conduct studies that advance theories.

There is no question that colleges and universities--especially Land-Grant universities--are accepting Boyer's (1996) challenge to "connect the rich resources of the university" (p. 11) to community needs and problems (Maurrasse, 2001; Sandmann, 2002; Zimpher, Percy, & Brukardt, 2002). This challenge has created a greater awareness of the importance of applied research and the role that engaged scholarship can play in supporting the mission of Land-Grant universities.

While all research attempts to discover or establish facts or principles within a particular field, research has traditionally focused on activities that test and advance theory (Ary et al., 1996). Only recently has the new focus on the scholarship of engagement elevated applied research, which focuses on solving problems or taking advantage of opportunities to serve communities (Andranovich & Riposa, 1993), within the academy. With its rich history of community involvement and solution-oriented action, Extension is poised to take the lead in the scholarship of engagement.

This article describes a process that started in the mid-1990s in Northwest Ohio called the "Applied Research Initiative" (ARI). The objectives were (1) to help Extension professionals better understand the basics of conducting applied research and (2) to encourage Extension professionals to formally add a scholarship component to their existing work.

We argue that Extension professionals already conduct a wide variety of applied and action research (Andranovich & Riposa, 1993; Boyer, 1996) that exemplifies the goal of engaged scholarship. Where Extension often falls short is in systematically applying scientific tools and procedures to document and share the impact of its programs. A key concept in the Applied Research Initiative was that Extension programming inherently includes substantial elements of scholarship. By making a few minor adjustments, Extension professionals could structure their programs within an applied research framework that would allow those programs to be rigorously evaluated, validated, and shared with peers.

History/Development of the Program

The current Applied Research Initiative developed from several formal daylong programs piloted over the last decade. Most of these programs followed a moderated panel discussion format with professionals, some of whom were from Extension. These programs provided a variety of "hands-on" experiences designed to give participants the opportunity to learn from peers with more applied research experience. Specific objectives were to:

  • Understand the need for "scholarly work" for faculty and non-faculty agents

  • Identify projects with a potential applied research component

  • Understand how to measure things we are working on

  • Present the components of a larger "project" as they are developed

  • Understand how to organize your material

  • Recognize good and bad examples of posters, papers, etc.

  • Use imagination in looking for outlets to share work

An important outcome of these initial programs was the recognition that Extension professionals new to the research process needed "mentors" to guide and support them through the various phases of applied research. Another was the identification of a need for program evaluation and data collection assistance.

To help meet these needs, mentoring was built into the Applied Research Initiative. In addition, a series of Program Planning & Impact Documentation in-services for all program areas was conducted on a regional level. These in-services introduced the LOGIC model as a tool for program planning and evaluation and used examples of applied research from each program area for relevance. These 4-hour in-services, led by personnel from the state program development and evaluation unit, had the following objectives:

  • Identify differences between needs assessment, formative, and summative evaluations

  • Schematically complete a program logic model for a project

  • Realize the variety of potential methods by which to collect data for evaluation of projects

  • Design a tentative evaluation plan for a project

  • Match at least one evaluation method to a component of a project

  • Identify sound impact statements

These Program Planning and Impact Documentation in-services evolved into the current four-phase Applied Research Initiative.

ARI Program Outline

The ARI was conceived as an ongoing, four-phase, personal, professional, and organizational development effort. The four phases were designed to take into account participants' varied knowledge levels and degrees of interest. Each phase focused on a discrete part of the applied research process, enabling professionals with little experience to benefit from all four phases, while more experienced professionals could participate in the phases they found most beneficial.

Phase I

Phase I provided an overview of ARI's objectives and introduced the content planned for subsequent phases. In this daylong program, a presentation of the LOGIC Model provided participants with a bridge between program evaluation and applied research in terms of inputs, outputs, outcomes, and their relationship to program impact and creating knowledge. Levels of impact were also discussed along with types of evidence and methods of gathering evidence. Participants, including field and campus-based staff and faculty, discussed current programs that could be developed into research projects; potential mentors for these projects were identified; and resources that participants would find helpful in the applied research process were shared. Incentives to encourage ongoing participation included the following.

  • Participants would be guided/mentored throughout the process.

  • Participants would be able to author scholarly presentations and articles for publication.

  • Participants would be able to share their outcomes with peers and administrators at an Annual District Conference.

  • Participants would be recognized and rewarded accordingly.
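
For readers who think in code, the LOGIC Model vocabulary introduced in Phase I (inputs, outputs, outcomes, and their relationship to impact) can be made concrete with a minimal sketch. The example below is our own illustration, not part of the ARI materials; the program name and all field values are hypothetical.

    from dataclasses import dataclass, field

    @dataclass
    class LogicModel:
        """Minimal sketch of a program logic model: the chain linking
        resources to long-term impact used to bridge program
        evaluation and applied research."""
        program: str
        inputs: list[str] = field(default_factory=list)    # resources invested
        outputs: list[str] = field(default_factory=list)   # activities and participation
        outcomes: list[str] = field(default_factory=list)  # short- and medium-term changes
        impact: str = ""                                   # long-term change in conditions

    # Hypothetical Extension program, for illustration only.
    model = LogicModel(
        program="Watershed education series",
        inputs=["educator time", "grant funds"],
        outputs=["6 workshops delivered", "140 producers attended"],
        outcomes=["participants adopt nutrient management plans"],
        impact="reduced nutrient runoff in the watershed",
    )
    print(f"{model.program}: {len(model.outputs)} outputs -> {model.impact}")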

Phase II

Phase II was designed to introduce participants to the mechanics of applied research. Composed of three daylong programs, Phase II resembled an abbreviated research methods course and covered research design and methods, data collection and management, and data analysis and interpretation. Session objectives included understanding basic terminology such as data, instrumentation, qualitative research, quantitative research, survey, questionnaire, reliability, validity, and types of error in the research process. Data collection strategies were discussed, including sampling procedures, question writing, questionnaire design, and methods for maximizing response rates. Preparing a data analysis plan, using statistics to share results, and levels of measurement (nominal, ordinal, interval, and ratio) were presented. Procedures to follow when working with human subjects were also included.
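
Phase II's coverage of instrument reliability can be illustrated with a short, hypothetical example. The Python sketch below is our own, not part of the ARI curriculum; it computes Cronbach's alpha, a common internal-consistency estimate, for an invented five-item questionnaire.

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        # Internal consistency for a respondents-by-items score matrix:
        # alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)      # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scores
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical data: 8 respondents answering 5 Likert items (1-7).
    rng = np.random.default_rng(0)
    base = rng.integers(2, 7, size=(8, 1))  # each respondent's overall tendency
    items = np.clip(base + rng.integers(-1, 2, size=(8, 5)), 1, 7)
    print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")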

Phase III

Phase III consisted of a District Highlights Conference in which participants would formally present their research project to their peers. The overall objective of Phase III was to provide a forum for Extension professionals to share what was learned about program development; designing evaluation techniques; and managing, analyzing, and interpreting data. The conference would also provide an opportunity to highlight programming suitable for impact evaluation and applied research. It was also envisioned that conference presenters (in a way, "graduates" of the ARI) could serve as mentors for the next group of ARI participants.

Phase IV

The final phase of the ARI involved strengthening Extension professionals' capacity to share their scholarly work with peers beyond the Extension District. Phase IV focused on the background and skills necessary to prepare a conference presentation proposal; write an academic abstract and/or author an academic paper; and identify conferences, journals, and other outlets appropriate for sharing their scholarly work.

Methods

Sample

A total of 26 self-selected Extension professionals took part in the ARI program. Eighteen months after the completion of the program, 81% of participants (n = 21) responded to a Web-based survey. Among those respondents, the median length of tenure was 13 years (range = 4 to 31); 68% were county-based professionals and 32% were state-based; slightly more than one third (36%) were tenure-track faculty.

Procedures

To document the impact of the ARI, both formative and summative evaluations were conducted. Formative evaluation examines whether the procedures undertaken to achieve intended goals are likely to accomplish them; it provides ongoing feedback about the strengths and weaknesses of a process that may facilitate or hinder achievement of its intended outcomes. Summative evaluation, on the other hand, documents whether or not the intended goals were achieved.

Formative evaluation has the added benefit of strengthening the confidence with which researchers can attribute changes in outcomes to the influence of their programming. That is, barring an experimental design with random assignment of subjects to conditions, causality cannot be inferred and the ability to generalize findings is limited. To the extent that findings from a formative evaluation suggest that the program created the antecedent conditions for accomplishing the desired and anticipated outcomes, formative evaluations can also serve as a manipulation check on the effectiveness of the program (Patton, 1994; Scriven, 1994).

Formative Evaluation

Program planners conducted a variety of formative evaluations throughout the program. Program sessions were often taught by guest speakers, many of whom conducted their own formative evaluations. Having a range of formative evaluation processes had both practical and pragmatic benefits. The practical benefit was allowing guest speakers the flexibility to design their own post-session assessments of participants' learning. The pragmatic benefit of using a variety of formative evaluation procedures is that, to the extent that different measurements create a pattern of similar findings, greater confidence can be placed in the validity of that pattern (Dick & Carey, 2001; Ertmer & Quinn, 2003).

Formative evaluations were conducted three times during the 18 months of educational programming. In addition, a pre- and post-test assessment of participants' knowledge about the LOGIC model was conducted in conjunction with the first session. Using paired-samples t-tests, statistically significant gains in knowledge and confidence in planning, designing, and conducting applied research were found for each of the formative evaluations.
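
To illustrate the paired-samples t-test applied to pre- and post-test scores, here is a brief Python sketch using SciPy. The scores are invented for illustration and are not the ARI's actual evaluation data.

    import numpy as np
    from scipy import stats

    # Hypothetical pre/post knowledge scores (0-100) for 10 participants.
    pre = np.array([52, 61, 48, 70, 55, 66, 59, 62, 50, 58])
    post = np.array([68, 72, 60, 78, 63, 75, 70, 71, 61, 69])

    # Paired-samples t-test: each participant serves as his or her own control.
    t_stat, p_value = stats.ttest_rel(post, pre)
    print(f"mean gain = {np.mean(post - pre):.1f} points")
    print(f"t({len(pre) - 1}) = {t_stat:.2f}, p = {p_value:.4f}")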

The final formative evaluation was conducted shortly after the 18-month-long program ended. This questionnaire asked only open-ended questions; a content analysis of the responses revealed four major themes.

  1. Participants found the program to be valuable and in particular learned that conducting applied research was not incompatible with their current job responsibilities and time constraints.

  2. Participants reported several concrete steps they had learned that would allow them to begin conducting applied research.

  3. Participants reported that the class format was conducive to learning. That is, they enjoyed the informal atmosphere and the ability to ask questions and share expectations.

  4. Participants noted that the program could be improved by including more opportunities for hands-on learning.

Results

A summative evaluation was conducted using a Web-based instrument roughly 18 months after the final ARI program. To assess whether participants' attitudes toward various components of the applied research process improved as a result of the ARI, participants were asked to respond to a list of seven topics discussed during the series.

Respondents were asked to check a number on a Likert-type scale that signified the extent to which their attitude had become more negative (1) or more positive (7) as a result of the Applied Research Initiative.

As can be seen in Table 1, an average of roughly 55% of participants across the seven topics reported a more positive attitude. The largest change was for designing applied research projects, toward which 72% of participants reported a more positive attitude. Interestingly, attitudes toward the human subjects review process became more negative for 24% of participants as a result of the educational program on that subject.

Table 1.
Change in Attitude Toward Components of the Applied Research Initiative
(scale: 1 = became more negative, 4 = no change, 7 = became more positive)

                                                   1     2     3     4     5     6     7
Logic Model                                       0%    0%    5%   38%   48%   10%    0%
Literature Reviews                                0%    0%    5%   43%   48%    5%    0%
Research Design                                   0%    0%    0%   29%   48%   19%    5%
Research Methods                                  0%    0%    0%   33%   43%   24%    0%
Statistics                                        0%    0%    5%   33%   33%   29%    0%
Human Subjects Review                             0%   10%   14%   38%   29%   10%    0%
Research that does not involve human subjects     0%    0%   10%   57%   10%   24%    0%
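
The percentages in Tables 1 and 2 summarize 1-7 responses relative to the scale midpoint. For readers who want to produce the same kind of summary, the sketch below (our own illustration, with invented responses) shows one way to compute it.

    import numpy as np

    def likert_change_summary(responses: np.ndarray) -> dict[str, float]:
        # Share of respondents below, at, and above the midpoint (4)
        # of a 1-7 change scale, as in Tables 1 and 2.
        n = len(responses)
        return {
            "more negative (1-3)": 100 * np.sum(responses < 4) / n,
            "no change (4)": 100 * np.sum(responses == 4) / n,
            "more positive (5-7)": 100 * np.sum(responses > 4) / n,
        }

    # Hypothetical responses from 21 participants on one topic.
    responses = np.array([4, 5, 5, 6, 4, 5, 7, 4, 5, 6, 4,
                          5, 5, 4, 6, 5, 4, 5, 4, 6, 5])
    for label, pct in likert_change_summary(responses).items():
        print(f"{label}: {pct:.0f}%")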

To determine whether participants perceived themselves to be more competent to conduct applied research as a result of attending the series of classes, participants were asked to respond to a list of 15 topics and skills taught during the series. Respondents were asked to check a number on a Likert-type scale that signified the extent to which their competence had greatly decreased (1) or greatly increased (7).

As can be seen in Table 2, an average of 48% of participants across the 15 topics reported an increase in competence. Competence in survey research methods (the Tailored Design Method) registered the lowest positive change, at 30%. Competence in completing a systematic review of literature and in conducting research that does not involve human subjects registered the next lowest positive change, at 34%. Mean scores for perceived competence indicate that, on average, participants developed a degree of research competence as a result of attending the series of classes.

Table 2.
Perceived Competence in Conducting Applied Research
(scale: 1 = decreased greatly, 4 = no change, 7 = increased greatly)

                                                                  1     2     3     4     5     6     7
Using the Logic Model to design applied research projects        0%    0%    5%   33%   62%    0%    0%
Understanding the difference between outputs and outcomes        0%    0%    5%   33%   43%   14%    5%
Completing a systematic literature review of related research    0%    0%    5%   62%   29%    5%    0%
Designing research                                               0%    0%    5%   40%   50%    0%    5%
Selecting appropriate research methods                           0%    0%    5%   33%   57%    5%    0%
Determining reliability of an instrument                         0%    0%    0%   62%   29%   10%    0%
Determining validity                                             0%    0%    0%   57%   38%    5%    0%
Identifying errors in a survey process                           0%    0%    0%   52%   48%    0%    0%
Using appropriate sampling techniques                            0%    0%    0%   48%   52%    0%    0%
Using the Tailored Design Method                                 0%    0%    5%   65%   30%    0%    0%
Using Likert scales                                              0%    0%    5%   43%   38%   14%    0%
Critiquing questionnaires                                        0%    0%   10%   33%   48%   10%    0%
Conducting research that does not involve human subjects         0%    0%   10%   57%   24%   10%    0%
Telling the research story with statistics                       0%    0%   10%   43%   33%   14%    0%
Completing the human subjects review process                     0%    0%   10%   48%   38%    5%    0%

Our ultimate goal for this program was to produce behavioral changes in participants. We measured this in three ways. First, participants were asked to describe one thing they had changed as a result of being a part of the program. Responses ranged from being more willing to conduct applied research, to having a more positive attitude about conducting applied research, to having the ability to think more critically about local research opportunities. It was also noted that new peer contacts were made as a result of the program that could serve as an applied research support network.

Second, to assess whether participants were using the knowledge and skills gained in the series of classes, participants were asked to respond to a list of 13 behaviors (e.g., "Since participating in the ARI, I have used the logic model to design applied research projects"). Respondents were asked to report the number of times they had exhibited each behavior or taken that action in the past 12 months.

Consistent with the improvements in attitude toward conducting applied research and with the increase in perceived competence in ability to conduct applied research, more than 8 out of 10 participants reported using research methods since taking part in the Applied Research Initiative. Slightly more than 70% reported designing an applied research project since participating. Two thirds indicated they had used the logic model to design a research project and had used statistics to tell a research story.

Finally, we wanted to know whether participants produced scholarly work as a result of attending the series of classes. To assess the ultimate success of the program, participants were asked to report whether they had submitted, published, and/or presented a scholarly paper since completing the series approximately 18 months earlier. As can be seen in Table 3, 18 instances of scholarly output were reported. Presentation at a national conference was reported by 4 of the 21 respondents, and another 4 indicated they had a submission in the review process.

Table 3.
Production of Scholarly Work (gross output)

Published an applied research study in a peer-reviewed journal       2
Submitted an applied research study that is in the review process    4
Presented an applied research study at a national conference         4
Presented an applied research study at an international conference   2
Presented an applied research study at a regional conference         3
Presented an applied research study at a state conference            3

Conclusions

The major goal of the Applied Research Initiative was to create engaged scholars. That is, we wanted not only to help Extension professionals better understand the basics of conducting applied research, but also to encourage them to formally add a scholarship component to their existing work. Our findings indicate that these objectives were met. Extension professionals reported significant increases in knowledge about research processes, improved attitudes toward conducting applied research, and greater competence in their ability to produce applied research. While we did not collect scholarly output figures prior to the ARI, we know that at least 15 of our 26 participants published, submitted, and/or presented research findings in 17 papers that documented their contributions to solving local problems and addressing issues of public concern.

Participants found the program format to be useful for learning, networking, and building upon the knowledge they already possessed. Participants' perceptions, both positive and negative, regarding engaging in applied research activities were strengthened. For example, learning more about the hurdles involved in conducting applied research (the university's Human Subjects Review process in particular) left participants feeling a bit uneasy. However, participant comments indicated an improved relationship with Extension's Program Development and Evaluation Unit. In addition, participants reported being better able to identify applied research opportunities in the Extension work in which they were already engaged.

We cannot be sure whether participation in the ARI "caused" our participants to write, present, and/or publish an applied research project. Causality can only be established with an experimental or quasi-experimental research design, and ours lacked the elements necessary to establish it. Most notably, our sample was self-selected (non-random), and our design lacked a control group against which to compare the changes that occurred in our program group. Thus, we cannot know how many of our participants would have produced scholarly works without attending the ARI.

Despite that limitation, the findings from our formative evaluations suggest that at least some of the outcomes we documented can be attributed to our program. That is, if findings from the formative evaluations are viewed as "checks" on the effectiveness of the program in creating the antecedent conditions necessary for accomplishing the desired and anticipated outcomes (having our participants produce scholarly works), then we may be confident that the ARI contributed to the outcomes our participants achieved.

While our evaluation of this program indicated that participants benefited, two shortcomings of the program and the larger organization were exposed. Participants enjoyed the opportunities to learn from peers and mentors, yet they also indicated a need to expand the network of research "mentors" beyond the framework of the program itself. In addition, it was apparent that more encouragement at the organizational level to integrate applied research into local programming would lead more Extension professionals to incorporate applied research activities into their work.

If Boyer (1996) was correct in stating that "the public has lost confidence in our institutions of higher education to address the challenges we face in our communities and country," the success of this kind of training may prove to be critical to Extension's ability to sustain its central role in fulfilling the mission of the Land-Grant university. Extension is in a unique position to foster the Scholarship of Engagement and to guide and mentor the engaged scholars whom universities around the country are scrambling to produce, promote, and develop.

A second-generation ARI, revised to address many of the suggestions put forth by past participants, has recently been initiated, with more individuals wanting to participate than could be comfortably accommodated. Perhaps the combination of the growing awareness of the need for engaged scholarship and positive word-of-mouth from our past participants accounts for the growing interest.

References

Andranovich, G. D., & Riposa, G. (1993). Doing urban research. Newbury Park, CA: Sage.

Ary, D., Jacobs, L. C., & Razavieh, A. (1996). Introduction to research in education (5th ed.). Fort Worth, TX: Harcourt Brace.

Boyer, E. L. (1996). The scholarship of engagement. Journal of Public Service and Outreach, 1, 11-20.

Dick, W., & Carey, L. (2001). The systematic design of instruction (5th ed.). New York: Longman.

Ertmer, P. A., & Quinn, J. (2003). The ID casebook: Case studies in instructional design (2nd ed.). Upper Saddle River, NJ: Prentice Hall.

Maurrasse, D. (2001). Beyond the campus: How colleges and universities form partnerships with their communities. New York: Routledge.

Patton, M. Q. (1994). Developmental evaluation. Evaluation Practice, 15, 311-319.

Sandmann, L. R. (2002). Serving society: The scholarship of engagement. HERDSA News: Higher Education Research and Development Society of Australasia, 24, 4-7.

Scriven, M. (1994). Evaluation thesaurus (5th ed.). Newbury Park, CA: Sage.

Zimpher, N. L., Percy, S. L., & Brukardt, M. J. (2002). A time for boldness: A story of institutional change. Boston: Anker Publishing.