The Journal of Extension - www.joe.org

October 2015 // Volume 53 // Number 5 // Feature // v53-5a2

A Framework for Identifying Implementation Issues Affecting Extension Human Sciences Programming

Abstract
Extension programs based on identified needs, relevant theory, and solid research too often fail to realize their objectives. Program implementation is acknowledged to contribute to program effectiveness, yet systematic attention has not been paid to the array of implementation issues that can complicate achieving program goals. We developed the multilevel Implementation Issues Framework (IIF) to guide the identification and analysis of factors contributing to the ability of a program model to achieve its intended outcomes. The IIF can be used to complement logic models, inform process evaluation efforts for new and multisite programs, and support the implementation of evidence-based programming.


Ellen Abell
Extension Specialist/Associate Professor
Alabama Cooperative Extension System
Auburn, Alabama
abellel@auburn.edu

Rebekah Cummings
Adjunct Instructor
Appalachian State University
Boone, North Carolina
cummingsra@appstate.edu

Adrienne M. Duke
Extension Specialist/Assistant Professor
Alabama Cooperative Extension System
Auburn, Alabama
amd0046@auburn.edu

Jennifer Wells Marshall
State Leader for Program Evaluation
Alabama Cooperative Extension System
Auburn, Alabama
wellsja@aces.edu

Introduction

"The most likely point of failure of a program is not weaknesses in the conceptual design but failures in implementation" (Hughes, 1994, p. 76). Since this statement, made over 20 years ago, it has become a commonplace to acknowledge the importance of implementation for Extension program effectiveness (Bush, Mullis, & Mullis, 1995; Decker, 1990; Rennekamp & Arnold, 2009). Although there is wide recognition among Extension personnel that real-world issues affect implementing a conceptual design (i.e., program model) as planned (Duerden & Witt, 2012), the array of factors contributing to a program implementation's success or failure remains largely unspecified. Furthermore, the inputs and outputs identified in any given logic model presume that they will perform as planned. To the extent that factors influencing program implementation are poorly defined and assumptions about the performance of program inputs and outputs are unexamined, they can be said to inhabit a "black box" with the capacity to interfere with the translation of a program model into effective programming.

Our collective professional experience has taught us that ignorance of the contents of the black box hinders effective program planning, reduces recognition of barriers that may contribute to ineffective implementation, and narrows the focus of process evaluation procedures capable of showing both program strengths and needs for adjustment. Our purposes here are to unpack the contents of the box—identifying and systematically organizing a range of implementation issues—and offer examples of questions designed to reveal assumptions about the operation of program inputs and outputs. We thereby hope to strengthen the analysis of program design and implementation issues affecting the ability of Extension programming to achieve its goals.

Framing Program Implementation Issues

Research across a variety of disciplines has drawn attention to the limits of our understanding of implementation processes. Work in the health promotion field has recognized the need to systematically evaluate the implementation process to ensure that an intervention is conducted as planned (Brownson, Baker, Leet, Gillespie, & True, 2011; Steckler, Goodman, McLeroy, Davis, & Koch, 1992). Researchers in the early childhood development field have noted that the scientific knowledge base guiding early childhood policies and programs is constrained by the relative lack of rigorous, systematic program implementation evaluations (Boethel, 2004; Shonkoff & Fisher, 2013). In the home visiting field, the need to more systematically address implementation features such as fidelity of curriculum delivery and staffing characteristics has been explicitly identified, as has the need to specifically design research that helps improve program quality and implementation (Gomby, Culross, & Behrman, 1999; Watson, White, Taplin, & Huntsman, 2005).

We conducted a review of the implementation literature in these and other areas of practice associated with human sciences programming in an effort to explain inconsistencies we observed across multiple replications of a successful parenting education program (Cummings, 1999). This collection of empirical and practice-informed work began with a focus on the family life education and evaluation literature and was subsequently broadened to include work in the home visiting and health promotion fields. We also considered recommendations from the diffusion of innovation and effective prevention programming literatures. Although these efforts resulted in identifying a variety of individual factors, absent was any structure to support systematic, critical thinking about their relevance for our programming.

Consequently, we sorted the identified implementation factors into five categories: conceptual design, participants, staff, organizational climate, and community (Cummings, 1999). We recognized in these categories a progression from microlevel to macrolevel structures reminiscent of Bronfenbrenner's (1979) ecological model of human development. This prompted us to regard the conceptual design as similar to the developing child at the center of a multilevel framework. We further organized the factors within each category into smaller subgroups, or focus areas (as listed in Tables 1-4, presented later). Figure 1 schematically represents our arrangement of these categories into the Implementation Issues Framework (IIF). We then formulated a definition of the implementation process aligned with this arrangement: Program implementation consists of the actions taken to transform a program's conceptual design into programmatic efforts capable of achieving identified outcomes given a particular set of participants and staff within a specific organizational climate and community.

Figure 1.
Schematic of the Implementation Issues Framework.

As Figure 1 illustrates, the conceptual design initially stands alone, on the far left of the schematic, as an untested blueprint of the program's research base, objectives and desired outcomes, audience, key and adaptable features, methods and procedures, resources and materials, and evaluation plan. The conceptual design is represented a second time, situated at the center of the context in which it is implemented and thereby subject to the influence of implementation-related factors that may come into play as it is translated into action. The four implementation spheres of influence orbit around the conceptual design-in-action, illustrating the idea that factors found within each sphere may impact and be impacted by issues in one or more of the other spheres.

The two arrows connecting implementation and short-term outcomes illustrate that (a) implementation issues influence the capacity of the program to achieve identified outcomes and (b) evaluation data about short-term outcomes can be fed back into the implementation process to identify and guide revisions and improvements to modifiable implementation features and aspects of the conceptual design. These arrows depict a feedback loop that can be described as an iterative evaluative process occurring simultaneously with design and implementation processes. The bidirectional arrow between the program implementation process and long-term outcomes indicates that implementation factors can directly influence long-term outcomes and that knowledge of long-term outcomes can offer insight into relations among conceptual design features and implementation factors. Finally, program development and implementation occur within the macro-environment and are thus subject to political, economic, and cultural influences at state and national levels, which impact the ability of a program to achieve its goals.
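For readers who maintain planning or evaluation tools in software, the brief sketch below illustrates one way the framework's multilevel arrangement could be made operational. It is a hypothetical illustration of our own, not part of the published framework: it encodes an abridged selection of spheres, focus areas, and sample questions from Tables 1-4 as a nested structure and walks that structure to flag unanswered review questions, loosely mirroring the feedback loop in Figure 1. The names, the abridged content, and the flagging rule are all assumptions made for the example.

```python
# Minimal sketch (ours, illustrative only): the IIF's spheres, focus areas,
# and sample questions as a nested structure that can drive a pre- or
# mid-implementation review. Content is abridged from Tables 1-4.

IIF = {
    "Program participants": {
        "Participant-program match": [
            "What reasons do participants give for participating in the program?",
            "How does the program address additional needs expressed by participants?",
        ],
    },
    "Program staff": {
        "Training and supervision": [
            "What is the quality and frequency of training provided for staff?",
        ],
    },
    "Organizational climate": {
        "Relations between the program and host agency": [
            "How does the local Extension office demonstrate support for the program?",
        ],
    },
    "Community": {
        "Program involvement": [
            "How do community members and leaders perceive the program?",
        ],
    },
}

def review(answers):
    """Walk every sphere, focus area, and question; collect items that are
    unanswered (or answered negatively) as candidates for adjustment,
    mirroring the feedback loop in Figure 1."""
    flagged = []
    for sphere, focus_areas in IIF.items():
        for focus_area, questions in focus_areas.items():
            for question in questions:
                if not answers.get(question):  # missing/falsy answer: revisit
                    flagged.append((sphere, focus_area, question))
    return flagged

if __name__ == "__main__":
    # Hypothetical short-term evaluation findings fed back into the review.
    answers = {
        "What is the quality and frequency of training provided for staff?":
            "Quarterly; rated adequate by staff",
    }
    for sphere, area, question in review(answers):
        print(f"[{sphere} / {area}] needs attention: {question}")
```

A planning team could substitute its own focus areas and questions and rerun such a review across program cycles as new evaluation data arrive.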

Since the development of this framework, we have used it as a lens through which to view the development of our Extension programming. The IIF has also informed our decision-making about multisite programming, especially when a program implemented effectively in one or more locations appears to be struggling to replicate in other locations (Cummings, 1999; Lowry, 2002; Wells, 2005). In the following sections, we provide an overview of each level of the IIF, beginning with the essential aspects of an effective conceptual design.

Conceptual Design

A quality conceptual design is based on comprehensive research information and a clearly operationalized theoretical perspective (Hughes, 1994; Price, Cowen, Lorion, & Ramos-McKay, 1989; Steckler et al., 1992; World Health Organization, 2012). It establishes clearly stated program goals that function as the foundation for service philosophy, delivery methods, and outcome assessment criteria (Blase & Fixsen, 2013). The nature, intensity, and duration of program services are explicitly outlined, and activity plans and directions for facilitating processes designed to achieve program goals are provided (Fixsen, Naoom, Blase, Friedman, & Wallace, 2005; Fixsen, Blase, Naoom, & Wallace, 2009; Hughes, 1994; O'Donnell, 2008). The design spells out the intended characteristics of program participants, recruiting methods, and strategies to reduce barriers to participation (Daro, McCurdy, Falconnier, & Stojanovic, 2003; Hughes, 1994; Price et al., 1989; Westmaas, Gil-Rivas, & Cohen-Silver, 2007). It also takes into account the contextual realities of the target audience: the social and environmental contexts in which the intended program participants live (Halgunseth, Peterson, Stark, & Moodie, 2009).

A well-made conceptual design outlines the staffing patterns required to conduct the program, including the number of staff members, desired qualifications, responsibilities, and the skills needed to deliver the proposed programming to the targeted audience (Fixsen et al., 2005; Powell, 1993; Price et al., 1989; Wasik, 1993). It identifies the resources and materials needed to carry out program goals, and it incorporates preservice and inservice training to support staff in content knowledge, resource use, and organizational processes and procedures (Parcel, Perry, & Taylor, 1990; Wasik, 1993). The design supports staff retention by building in transparent processes to monitor and address staff working conditions, such as atypical hours, stressful situations, and safety concerns (Brookes, Summers, Thornburg, Ispa, & Lane, 2006; Fixsen et al., 2005; Wasik, 1993).

Finally, integrated into the quality conceptual design is an evaluation plan that specifies processes and procedures for the regular monitoring of program activities and analysis of progress toward intended outcomes (Hughes, 1994). This plan involves more than simply reporting inputs and outputs; it outlines a process, critical to program improvement efforts, for assessing program strengths and weaknesses and examining program effectiveness. Putting such a process in place facilitates the identification of core features responsible for achieving program goals while also providing assurance to program funders and stakeholders that systematic, meaningful efforts are addressing their goals and interests (Jacobs, 1988).

In summary, the conceptual design is a detailed road map for putting important ideas and goals into action, one that can guide program adjustment, expansion, and multisite replication. While a well-conceived conceptual design is necessary, it alone is not sufficient to ensure a program's effectiveness. Translating the design into effective program implementation requires adjustment to the realities associated with the particular participants, staff, organizational climate, and community context.

Program Participant Sphere

The translation of the conceptual design into programmatic action is influenced by participant characteristics, participants' social circumstances, and the match between participant needs and program goals. Table 1 contains a list of specific implementation factors noted in prior research within each of these three focus areas. (A complete set of references for these factors can be found at the link provided in the table note.) We also provide a noncomprehensive list of sample questions in each focus area that could begin an analysis of whether and how these factors may affect intended program outcomes. As program implementers consider these questions, they may recognize the need to generate alternative or substantively different questions that more closely reflect the particular circumstances and characteristics of their targeted program audience.

Table 1.
Implementation Focus Areas, Factors, and Sample Questions in the Program Participant Sphere
Focus area: Individual characteristics

Factors:
- Age, sex
- Pre-existing skills, knowledge, behavior, values, attitudes
- Willingness/ability to discuss sensitive issues
- Internal resources, e.g., emotional and physical health, social competence, education, etc.

Sample questions for implementers:
- What skills/knowledge do participants already show regarding program content/goals?
- Do participants believe program goals/content are important, relevant, and valuable?
- How does the program show respect for differences in participants' knowledge, values?
- Does the program inform the participant of the confidentiality of information-sharing activities?
- Are program activities adaptable to a range of literacy and cognitive levels?

Focus area: Social context

Factors:
- Culture, family type, language
- Socioeconomic conditions, e.g., income and employment status
- Economic and residential security
- History of discrimination
- Family system and social support network

Sample questions for implementers:
- How does the program reflect or respect the culture and daily lives of the participants?
- What level of educational background is expected of participants by the program?
- Given participant socioeconomic status, do program activities and recommendations allow participants to fully participate and progress in the program?
- How does achieving program goals affect participants' relationships to other family members?
- How do members of the participants' social network support or hinder participant involvement?

Focus area: Participant-program match

Factors:
- Participant acceptance of program goals
- Ability of program to meet needs of participants
- Participant perceptions of staff

Sample questions for implementers:
- What reasons do participants give for participating in the program?
- What benefits do participants report they expect to obtain from participating?
- How does the program address additional needs expressed by participants?
- How do staff members communicate their interest in and involvement with participants?
- How do participants perceive staff motivations to work with them?

Note: A complete set of references for these factors can be found at https://aurora.auburn.edu/handle/11200/48507.

Program Staff Sphere

A second set of issues arising in the translation of a program design into action involves program staff. In this sphere, focus areas to consider include (a) staff members' background as it contributes to the quality of interactions with program participants, (b) the ability of staff members to carry out the activities of the program, and (c) attention to the professional development and recognition of staff. Table 2 outlines implementation factors regarding program staffing concerns. A noncomprehensive set of questions regarding selected program staff factors provides examples for examining program assumptions about how these factors may operate to influence program implementation efforts.

Table 2.
Implementation Focus Areas, Factors, and Sample Questions in the Program Staff Sphere
Focus area: Professional background related to the specific programming environment

Factors:
- Extension staff's level of professional education and experience
- Extension staff's commitment to program objectives
- Extension staff's knowledge of the community
- Extension staff's ability to respect participants' values and beliefs

Sample questions for implementers:
- How does the educational background of Extension staff facilitate/complicate their ability to relate to participants and convey program content appropriately to them?
- To what extent are program content and explicit objectives consistent with Extension staff's own beliefs about/understanding of appropriate goals for participants?
- How does Extension staff's knowledge of the community influence attitudes and behavior toward participants?
- To what extent do Extension staff understand the strengths and needs of participants and the challenges they face?
- How and to what extent do Extension staff establish credibility with their audience?

Focus area: Skills and competencies

Factors:
- Extension staff's interpersonal competence and communication skills
- Extension staff's ability to convey program information well
- Extension staff's ability to respond to participants sensitively and professionally
- Extension staff's use of a solution-focused approach
- Extension staff's ability to work as part of a team

Sample questions for implementers:
- Are the interpersonal skills of Extension staff sufficient for building relationships with participants?
- Do Extension program staff listen to, accurately reflect, and respond to participant needs and input?
- What problem-solving skills do Extension staff members employ to address problems and barriers in order to meet participants' needs?
- What is the quality of the working relationships among Extension staff members (e.g., joint decision-making processes, division of labor, cooperation)?
- To what extent do Extension staff members persist in their efforts when faced with discouraging events?

Focus area: Training and supervision

Factors:
- Sufficient and regular staff training
- Attention to professional growth
- Regular job performance feedback
- Recognition of staff accomplishments

Sample questions for implementers:
- What is the quality and frequency of training provided for staff?
- How are staff members' needs for professional growth determined?
- How often and in what form is job performance feedback provided?
- Are staff members made aware of the impact of their work?
- What rewards are available to recognize staff accomplishments?

Note: A complete set of references for these factors can be found at https://aurora.auburn.edu/handle/11200/48507.

Organizational Climate Sphere

This sphere refers to aspects of the organizational setting in which a program operates. Most programs are connected to a host agency or sponsoring organization (such as Extension) by virtue of receiving financial, supervisory, administrative, and/or physical (e.g., office space) resources to support programming efforts. The organizational climate encompasses the quality of the work environment for program personnel and the relations between the broader program and its sponsoring organization. Table 3 presents the factors and sample questions connected with these two focus areas.

Extension often works in partnership at the community, county, or state level with other organizations sharing common goals. For example, schools, health clinics, and public or private agencies frequently sponsor family life education and prevention-related programming. Awareness of past relationships between the target audience and potential partners may reveal needs for capacity-building or marketing that should take place prior to or during program implementation.

Table 3.
Implementation Focus Areas, Factors, and Sample Questions in the Organizational Climate Sphere
Focus area: Quality of the work environment

Factors:
- Nature of supervisor-staff relationships
- Processes for handling conflict
- Openness/trust among staff

Sample questions for implementers:
- Do Extension staff members trust that workplace issues can be discussed and handled professionally?
- What issues produce conflicts among Extension staff or between Extension staff and supervisors?
- What processes are used to identify, address, and resolve conflicts?
- What provisions are made and what rewards are available to recognize Extension staff accomplishments?
- How often and in what form is job performance feedback provided?

Focus area: Relations between the program and host agency

Factors:
- Integration of the program with the local Extension office
- Relationship of the local Extension office with the program audience
- Receptivity of the local Extension office to the program
- Extent of Extension administrators' commitment to the program

Sample questions for implementers:
- In what ways is the county Extension office involved in program operations and decision-making?
- To what extent do prior relationships exist between the local Extension office and the target audience, and are efforts to improve or build on those relationships needed?
- To what extent are the efforts/goals of the program valued by the local Extension office?
- How does the local Extension office demonstrate support for the program?
- To what extent are Extension administrators committed to the program?

Note: A complete set of references for these factors can be found at https://aurora.auburn.edu/handle/11200/48507.

Community Sphere

The final set of issues addressed by this framework comprises a variety of community-level factors thought to affect program implementation. Focus areas relevant to the successful implementation of the conceptual design include community characteristics, community resources, and program involvement. Examining a program's intended goals and objectives in light of community characteristics such as local values, norms, and behavior patterns can reveal whether those characteristics present challenges to the program. Such an examination can also help specify how the community could be involved in the program and what types of resources may encourage attainment of program goals. Table 4 presents factors pertaining to these community issues and offers examples of questions that could be used to examine them explicitly.

Table 4.
Implementation Focus Areas, Factors, and Sample Questions in the Community Sphere
Focus area: Community characteristics

Factors:
- Local values, norms, and behavior patterns
- Needs and constraints
- Concerns of local government
- Diversity

Sample questions for implementers:
- What actions from this community indicate that its members value/support program goals?
- What are the needs, constraints, and most important issues facing this community?
- To what extent does the program address the concerns of local government?
- What types of diversity exist (e.g., racial/ethnic, geographic, social class, linguistic)?
- What challenges arise from this diversity (e.g., miscommunication, mistrust, divisions, competition, misunderstandings, instability)?

Focus area: Community resources

Factors:
- Existence of related resources, e.g., educational, social, economic, etc.
- Resources of the program's host agency
- Related activities and services

Sample questions for implementers:
- What other resources with similar goals or designed for a similar audience are available in the community?
- To what degree do similar or related services or activities complement or compete with the resources of the host agency?
- To what extent are related community services collaborating with the program or competing with it for funding, participants, Extension staff, community support, etc.?

Focus area: Program involvement

Factors:
- Problem awareness and concern
- Engagement in addressing the problem
- Perception of the program
- Local leader support
- Advisory group/coalition

Sample questions for implementers:
- To what extent are community members aware of, concerned about, and actively engaged with the issues addressed by the program?
- What level of involvement do community leaders show, e.g., through efforts to promote program goals, advocate for resources, and influence public opinion and local policy?
- How do community members and leaders perceive the program?
- Do members of the advisory group/coalition represent the concerns and experiences of program participants and the wider community?
- How is the advisory group/coalition used in program implementation?

Note: A complete set of references for these factors can be found at https://aurora.auburn.edu/handle/11200/48507.

Implications

An array of factors associated with implementation can be consequential for the actions taken to transform a program's conceptual design into programmatic efforts capable of achieving identified outcomes. The Implementation Issues Framework (IIF) offers a structure meant to capture the multiple levels at which these factors manifest. The IIF does not attempt to identify every potential influence on the effectiveness of a program. Its utility is found in the organization it provides for systematically thinking about the context in which a program is implemented. The IIF can serve as an aid in program planning with respect to the analysis of the issues that could support or potentially interfere with the implementation of a program. It can also be used to guide the problem-identification process when a program fails to achieve its key objectives, or, alternatively, to pinpoint and strengthen implementation features contributing to its success. In addition, given that efforts to replicate successful Extension programs in one or more new locations are common, the IIF can be used to guide planning and problem-solving related to factors that may differ from the original implementation context. Ultimately, the framework provides a basis for identifying and assessing implementation factors that may be important to carrying out and evaluating programs and their replications.

How does the IIF fit with other approaches used to address program development, implementation, and evaluation? Extension has productively embraced the logic model as an essential tool in the planning, reporting, and evaluation of programming (Taylor-Powell & Boyd, 2008). A completed logic model yields a blueprint of a program's required components and, consequently, is often used as the first step in outlining the actions needed to implement a program to achieve identified outcomes. However, Rennekamp and Arnold (2009) have argued that a logic model should be used for more than identifying and organizing necessary program components; it should represent the linkages among inputs, outputs, and outcomes for the purpose of elucidating the thinking behind why a given program should work as planned.

The IIF supports such detailed thinking. It extends the focus beyond the linear connections between logic model elements to encompass relevant implementation factors across multiple levels and the relationships and interactions among them. The IIF represents program development and implementation as a dynamic, recursive process similar to the systems approach of Bronfenbrenner's ecological model (1979). From this perspective, a child and a newly designed program both have elements internal to themselves, yet these elements are insufficient to effect full development. Interaction with the environment in which either is placed ultimately affects the outcomes. Once we accept that programs are embedded in an interconnected system of influences, the IIF becomes a tool for organizing and informing reasoned adjustments to program inputs and outputs.

The IIF is also complementary to the pre-implementation, or first, tier of the five-tiered approach to program evaluation (Jacobs, 1988). The purposes of this tier are to define the needs to be addressed by a program, detail the characteristics of the program, and assess the support of community members and organizational structures for the program. The IIF offers examples of issues and factors relevant to pre-implementation considerations. Asking explicit questions about factors pertinent to a program's unique inputs and outputs and their possible linkages serves to inform the decision-making of program planners. Such questions can be asked at any time after the program is underway, that is, at the second and third tiers of Jacobs' (1988) evaluation approach, for the purpose of identifying implementation challenges and accomplishments and diagnosing potential problems.

The burgeoning field of implementation science emerged to support the replication of evidence-based programming. It specifically seeks to better understand the processes necessary to implement evidence-based programs and to use rigorous evaluation methods to document the effectiveness of implementation activities (Fixsen et al., 2005; National Implementation Research Network). Once a program has been classified as "evidence-based," any organization or community implementing it is expected to achieve the program's intended outcomes. However, unconsidered factors related to program participants, program staff, the sponsoring organization or program setting, and/or the community may keep even an evidence-based program from fulfilling its promise. The IIF offers a point of departure for determining the extent to which a county or community is ready to successfully implement the specific evidence-based program being considered. Similarly, it can guide efforts to identify and build the capacity needed for that locale to become "implementation ready."
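As one hypothetical illustration of how such a readiness determination might be organized (again a sketch of our own, not a procedure the IIF or the implementation science literature prescribes), the code below tallies simple yes/no readiness indicators within each IIF sphere and reports spheres that may need capacity building before an evidence-based program is adopted. The indicators and the 80% threshold are invented for the example.

```python
# Hypothetical sketch: tallying IIF-style readiness indicators by sphere
# before adopting an evidence-based program. The indicators and the 80%
# threshold are invented for illustration; the IIF prescribes no such scoring.

from collections import defaultdict

# (sphere, indicator description, indicator met?) as a planning team might
# record them after working through questions like those in Tables 1-4.
indicators = [
    ("Program participants", "Target audience matches the design's intended audience", True),
    ("Program participants", "Known barriers to participation have mitigation plans", False),
    ("Program staff", "Staff meet the design's qualification requirements", True),
    ("Program staff", "Required preservice training is scheduled", True),
    ("Organizational climate", "Host office has committed space and supervision", False),
    ("Community", "Local leaders and an advisory group support the program", True),
]

def readiness_by_sphere(items, threshold=0.8):
    """Return each sphere's share of indicators met and whether it clears
    the (illustrative) readiness threshold."""
    met, total = defaultdict(int), defaultdict(int)
    for sphere, _description, is_met in items:
        total[sphere] += 1
        met[sphere] += is_met
    return {s: (met[s] / total[s], met[s] / total[s] >= threshold) for s in total}

for sphere, (score, ready) in readiness_by_sphere(indicators).items():
    status = "ready" if ready else "may need capacity building"
    print(f"{sphere}: {score:.0%} of indicators met -> {status}")
```

In practice, a planning team would weigh such indicators qualitatively rather than mechanically; the point is only that the IIF's spheres give the review a systematic shape.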

Extension professionals understand that programs are living, breathing entities that do not operate in a vacuum and can take on a life of their own. Conditions in communities and organizations influence how planned program activities are carried out. The purposeful consideration of day-to-day implementation issues is necessary for any Extension educator who is responsible for the implementation, functionality, and vitality of a program at the grassroots level. Without attention to the contents of the black box, even the best-conceived and research-informed programming can fail to make an impact. Programs may gradually erode or be prematurely ended because basic implementation features were not considered or monitored. Alternatively, programs may become a façade of effectiveness, simply going through the motions.

Conclusion

Achieving the core principles of Extension relies on effective county and community-based programming. A statewide Extension organization is only as strong as the ability of its personnel to implement programs at the community level. The IIF encourages program implementers to consider the on-the-ground realities of the targeted audience and the program's larger context and to examine the match between these realities and the theoretical or practical assumptions being made about how a program is supposed to work. We hope that the framework will provide a common language that practitioners and researchers can use to discuss what is intended to happen in the implementation of Extension programming and how it does or does not, in fact, happen.

References

Blase, K., & Fixsen, D. L. (2013). Core intervention components: Identifying and operationalizing what makes programs work. ASPE Research Brief, U.S. Department of Health and Human Services. Retrieved from ERIC database. (ED541353)

Boethel, M. (2004). Readiness: School, family, and community connections. Austin, TX: Southwest Educational Development Laboratory. Retrieved from: http://www.sedl.org/connections/resources/readiness-synthesis.pdf

Brookes, S. J., Summers, J. A., Thornburg, K. R., Ispa, J. M., & Lane, V. J. (2006). Building successful home visitor-mother relationships and reaching program goals in two Early Head Start programs: A qualitative look at contributing factors. Early Childhood Research Quarterly, 21, 25-45. doi:10.1016/j.ecresq.2006.01.005

Brownson, R. C., Baker, E. A., Leet, T., Gillespie, K. N., & True, W. (2011). Evidence-based public health (2nd ed.). New York, NY: Oxford University Press.

Bush, C. M., Mullis, R., & Mullis, A. (1995). Evaluation: An afterthought or an integral part of program development. Journal of Extension [On-line], 33(2). Article 2FEA4. Available at: http://www.joe.org/joe/1995april/a4.php

Cummings, R. (1999). An organizational framework of factors affecting family-based program implementation: Exploration of community-level factors associated with the success of the Begin Education Early program. (Unpublished master's thesis). Auburn University, Auburn, AL.

Daro, D., McCurdy, K., Falconnier, L., & Stojanovic, D. (2003). Sustaining new parents in home visitation services: Key participant and program factors. Child Abuse and Neglect, 27, 1101-1125. doi:10.1016/j.chiabu.2003.09.007

Decker, D. J. (1990). Analyzing program "failure." Journal of Extension [On-line], 28(3). Article 3FEA7. Available at: http://www.joe.org/joe/1990fall/a7.php

Duerden, M. D., & Witt, P. A. (2012). Assessing program implementation: What it is, why it's important, and how to do it. Journal of Extension [On-line], 50(1). Article 1FEA4. Available at: http://www.joe.org/joe/2012february/a4.php

Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation research: A synthesis of the literature. (FMHI Publication No. 231). Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, National Implementation Research Network.

Fixsen, D. L., Blase, K. A., Naoom, S. F., & Wallace, F. (2009). Core implementation components. Research on Social Work Practice, 19, 531-540. doi:10.1177/1049731509335549

Gomby, D. S., Culross, P. L., & Behrman, R. E. (1999). Home visiting: Recent program evaluations—analysis and recommendations. The Future of Children, 9(1), 4-26.

Halgunseth, L. C., Peterson, A., Stark, D. R., & Moodie, S. (2009). Family engagement, diverse families, and early childhood education programs: An integrated review of the literature. National Association for the Education of Young Children, The Pew Charitable Trusts. Retrieved from: http://www.naeyc.org/files/naeyc/file/research/FamEngage.pdf

Hughes, R., Jr. (1994). A framework for developing family life education programs. Family Relations, 43, 74-80. doi:10.2307/585145

Jacobs, F. H. (1988). The five-tiered approach to evaluation: Context and implementation. In H. Weiss & F. Jacobs (Eds.), Evaluating family programs (pp. 37-68). New York: Aldine de Gruyter.

Lowry, E. D. (2002). Program staff factors impacting the effectiveness of the Begin Education Early program. (Unpublished master's thesis). Auburn University, Auburn, AL.

O'Donnell, C. (2008). Defining, conceptualizing, and measuring fidelity of implementation and its relationship to outcomes in K-12 curriculum intervention research. Review of Educational Research, 78, 33-84. doi:10.3102/0034654307313793

Parcel, G. S., Perry, C. L., & Taylor, W. C. (1990). Beyond demonstration: Diffusion of health promotion innovations. In N. Bracht (Ed.), Health promotion at the community level (pp. 209-228). Newbury Park, CA: Sage.

Patton, M. Q. (2011). Developmental evaluation: Applying complexity concepts to enhance innovation and use. New York, NY: The Guilford Press.

Powell, D. R. (1993). Inside home visiting programs. The Future of Children, 3(3), 23-38.

Price, R. H., Cowen, E. L., Lorion, R. P., & Ramos-McKay, J. (1989). The search for effective prevention programs: What we learned along the way. American Journal of Orthopsychiatry, 59, 49-58. doi:10.1111/j.1939-0025.1989.tb01634.x

Rennekamp, R. A., & Arnold, M. E. (2009). What progress, program evaluation? Reflections on a quarter-century of Extension evaluation practice. Journal of Extension [On-line], 47(3). Article 3COM1. Available at: http://www.joe.org/joe/2009june/comm1.php

Shonkoff, J. P., & Fisher, P. A. (2013). Rethinking evidence-based practice and two-generation programs to create the future of early childhood policy. Development and Psychopathology, 25, 1635-1653. doi:10.1017/S0954579413000813

Steckler, A., Goodman, R. M., McLeroy, K., Davis, S., & Koch, G. (1992). Measuring the diffusion of innovative health promotion programs. American Journal of Health Promotion, 6, 214-224. doi:10.4278/0890-1171-6.3.214

Taylor-Powell, E., & Boyd, H. H. (2008). Evaluation capacity building in complex organizations. In M. Braverman, M. Engle, M. Arnold, & R. Rennekamp (Eds.), Program evaluation on a complex organizational system: Lessons from cooperative extension. New Directions for Evaluation, 120, 55-69.

Wasik, B. (1993). Staffing issues for home visiting programs. The Future of Children, 3(3), 144-157.

Watson, J., White, A., Taplin, S., & Huntsman, L. (2005). Prevention and early intervention literature review. NSW Department of Community Services.

Wells, J. A. (2005). An exploration of participant-level factors associated with the success of the Begin Education Early program. (Unpublished master's thesis). Auburn University, Auburn, AL.

Westmaas, J. L., Gil-Rivas, V., & Cohen-Silver, R. C. (2007). Designing and implementing interventions to promote health and prevent illness. New York, NY: Oxford University Press.

World Health Organization. (2012). Health education: Theoretical concepts, effective strategies and core competencies: A foundation document to guide capacity development of health educators. Regional Office for the Eastern Mediterranean: World Health Organization.