(p. 460) Quality Assurance and Program Development

Matthew R. Sanders and James N. Kirby

DOI: 10.1093/med-psych/9780190629069.003.0043

To remain relevant to the needs of contemporary parents, parenting programs need to evolve and be “refreshed.” A variety of innovations and adaptations in both the content and the process of delivering interventions have taken place within the Triple P system. These innovations and adaptations have evolved in the course of seeking better solutions to unmet needs faced by particular client groups, and to the challenges practitioners face in implementing programs to meet those needs.

This chapter focuses on the importance of having a continuous quality assurance (QA) process to ensure the continued success of programs. The ongoing search for better solutions to child problems continues to inspire Triple P program developers and researchers to develop and test new solutions for an increasingly wide range of problems, confirming the robustness of the intervention model and its core principles based on social learning theory, cognitive-behavioral principles, self-regulation, and behavior change techniques. We recently described a conceptual framework for program adaptation and innovation (Sanders & Kirby, 2014) to help guide the research and development process from initial theory building to the scaling up of interventions for wide-scale, sustained dissemination and implementation (Axford & Morpeth, 2013; Hodge & Turner, 2016). This chapter extends that earlier work by describing a QA process that program developers can use to facilitate program innovation. We describe a 10-stage research and development cycle that has informed the development of Triple P. Using the Triple P system as the exemplar, we illustrate how the model can be applied continuously, from initial program development and adaptation through to international dissemination, to ensure that a program is ready for dissemination and can benefit as many families as possible.

(p. 461) The Program Development Process and the Role of Developers

Quality assurance refers to the process used to create and maintain reliable standards of deliverables. QA encompasses activities planned before production work begins and is typically performed while the product is being developed (Crosby, 1984). In contrast, quality control (QC) procedures refer to quality-related activities used to verify that deliverables are of acceptable quality and that they are complete and correct (Stein & Heikkinen, 2009); QC activities are performed after the product is developed. In the context of developing an intervention program designed to solve a specific problem, an iterative process entailing both QA and QC steps is needed to ensure that the program meets the quality standards increasingly demanded by the field of prevention science.

Phases in the Development Process

Recently, we developed a pragmatic model, depicted in Figure 43.1, to assist in the development, testing, and subsequent dissemination of our parenting work involving the Triple P system (Sanders & Kirby, 2014). This model guides both the QA and QC procedures used in the ongoing development of interventions. Iteration in the model is shown by the two double-headed arrows that guide program developers from theory building to eventual dissemination and implementation. The model is iterative in that each step builds on the previous step and incorporates the views of end users (practitioners and agencies) and consumers (parents and children) regarding the appropriateness, feasibility, cultural relevance, and usefulness of the intervention.


Figure 43.1 Iterative 10-step model for program design, evaluation, and dissemination.


(p. 462)

As part of the iterative process, program developers need to be attuned to the changing ecological context within which the program will be deployed. The development process outlined in Figure 43.1 may seem time consuming to service systems seeking to access programs rapidly. However, a balance is needed between meeting service system demands for programs that work and the need to develop a credible evidence base to justify the dissemination and scaling up of interventions (Winston & Jacobsohn, 2010). A clearly defined pragmatic framework facilitates program development, evaluation, and translation, enabling greater transparency and efficiency. The pressure to disseminate programs prematurely, with insufficient evidence, can do more harm than good and does a disservice to parents, children, and the community.

Building a Theoretical Basis for an Intervention

For interventions to work, they need to be built on solid theoretical foundations. These foundations include having a clear theoretical framework that informs the specific types of intervention procedures used and the development of the component parts of the intervention. Although the most effective parenting interventions, including the Triple P system, evolved from a common social learning, cognitive-behavioral, and functional analysis framework, some programs also incorporate principles and procedures drawn from other theories, including attachment theory, developmental theory, cognitive social learning and self-regulation theory, and public health models of intervention (Sanders, 2012; Webster-Stratton, 1998).

Program Development and Design

The impetus for adapting an existing program or developing a new one can stem from a variety of sources, including epidemiological studies (where available) that help define the extent of the problem in populations of interest. A systematic review that identifies current prevention and treatment programs for the problem is useful for identifying potentially modifiable protective and risk factors. Research on cultural diversity, and on the implications of cultural differences relevant to a program, helps to identify implementation challenges in working with target groups (Morawska, Haslam, Milne, & Sanders, 2011). Consumer preference surveys can be used to garner information about the challenges, concerns, needs, and preferences of target groups (Sanders, Baker, & Turner, 2012). Finally, several studies have used focus groups with the intended population of interest and with professionals to help improve “the ecological fit” of a new program to the target population. A further QA step is to develop intervention manuals for use in pilot studies (Chambless & Ollendick, 2001). At this early stage, it is critical to reach agreement regarding authorship in order to avoid subsequent disputes.

In our center, as the Triple P system of interventions is owned by The University of Queensland, all staff and students working on Triple P projects are required to assign copyright of any new Triple P program materials to the University of Queensland. This policy ensures that a program can be disseminated under an existing licensing and publication agreement between the university and a dissemination organization.

(p. 463) Initial Feasibility Testing and Program Refinement

Pilot Studies

Once an intervention protocol has been developed, and before it is subjected to further evaluation through randomized clinical trials, it is useful to pilot test the actual protocols, including all materials, with individual cases or, more formally, using controlled single-case or intrasubject replication designs (Baer, Wolf, & Risley, 1968). Initial feasibility testing is the first opportunity to apply QC procedures to the developed intervention. The advantage of this early QC step is that the likely effects of the intervention can be determined, including the extent to which change occurs on primary outcome measures, the timing of observed changes (rapid or gradual), and whether changes across different outcome variables are synchronous or desynchronous (Kazdin & Nock, 2003).

Pilot studies also afford program developers the opportunity to learn how the program is received by end users (e.g., practitioners and agencies), as well as by consumers (parents and children). This can be achieved by including focus groups or questionnaires in the pilot trial aimed at examining whether the program is deemed acceptable, culturally appropriate, usable, and useful. Furthermore, during initial feasibility testing the developer is alerted to any implementation difficulties, including process issues, the timing of program activities, consumer acceptability and appropriateness of materials, and the sequencing of within-session tasks and exercises (Kazdin & Nock, 2003).

Program Refinement

After the initial feasibility testing, the first opportunity arises for program developers to refine the program in light of the obtained results from the quantitative and qualitative feedback. This might require modifications to specific program content and delivery to assist with successful implementation. For example, the steps outlined in protocol adherence checklists might need to be further detailed to best measure program fidelity.

Efficacy Trials

Efficacy trials evaluate the beneficial effects of a program under optimal conditions of delivery (Flay et al., 2005). After initial feasibility testing and program refinement, the developed program should be evaluated in a randomized controlled trial, which is commonly implemented by the program developers; this is also referred to as the “proof-of-concept” phase (Valentine et al., 2011). The foundation trial should follow best practice guidelines such as those detailed by CONSORT (Consolidated Standards of Reporting Trials; Altman et al., 2001). Efficacy trials also provide the opportunity to examine potential mediators, as well as the behavioral outcomes obtained from the intervention. In determining the impact that a given variable (e.g., participation in a Triple P program) has on the outcome of interest (e.g., changes in parental behavior), it is important to examine not only the direct relationship between the two variables but also any mediation or moderation that occurs as a result of other variables, such as changes in parental stress, parental adjustment, or parental self-efficacy.
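As a generic illustration of this distinction (a minimal sketch using the standard product-of-coefficients formulation for linear models, not a specific analysis reported in the trials cited here), let X denote program participation, M a candidate mediator such as parental self-efficacy, and Y the parenting outcome of interest:

M = i1 + aX + e1
Y = i2 + c′X + bM + e2

Here the indirect (mediated) effect is the product ab, the direct effect is c′, and the total effect decomposes as c = c′ + ab. Moderation, by contrast, is examined by adding an interaction term (e.g., X × Z for a putative moderator Z, such as family adversity) and testing whether the program effect differs across levels of Z.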

(p. 464) Effectiveness Trials

Programs that are disseminated need to be robust under everyday service delivery circumstances (Hodge & Turner, 2016). Among prevention scientists, effectiveness trials evaluate the effects a program achieves under real-world conditions (Flay et al., 2005). Effectiveness trials specifically permit the exploration of program outcomes when the program is delivered as part of usual service delivery in community settings. Through effectiveness trials, programs can be assessed for their robustness, and implementation enablers and barriers can be identified (Flay et al., 2005). Effectiveness trials also provide an opportunity to conduct the first cost-effectiveness analysis of the program. Specific programs within the Triple P system of interventions have undergone numerous effectiveness, service-based, and cost-effectiveness evaluations, for example, the Level 4 Group Triple P program (e.g., Gallart & Matthey, 2005).

Program Refinement

Each time an intervention is evaluated, an opportunity is created to revise, reflect, or refine intervention protocols. It is rare for trial prototypes of clinical procedures not to require further refinement before wider dissemination (Chambless & Ollendick, 2001). This process involves soliciting feedback from clients and practitioners concerning their experience of the program, including the readability of any written material, the relevance and usefulness of examples used, the types of activities involved, and the appropriateness and authenticity of video material (Winston & Jacobsohn, 2010).

Scaling Up Interventions for Dissemination

Dissemination refers to the process of taking evidence-based interventions from the research laboratory and delivering them to the community (Sanders, 2012). Dissemination requires a well-developed set of consumer materials or resources (such as manuals, workbooks, and DVDs), as well as professional training programs to train practitioners to deliver the intervention. A common problem faced by program developers when scaling up an intervention is that it can be difficult to do so in a university context that prioritizes research and teaching. Consulting with a university’s technology transfer operation (TTO) to formalize an agreement concerning intellectual property rights and license arrangements with organizations (purveyors) with capacity to manage the dissemination process can overcome this obstacle. Depending on the agreement reached, the dissemination organization may then become responsible for the dissemination process, which includes publishing resources and materials, providing video production, delivering professional training, providing program consultation and technical support, and meeting QA standards and QC measures in the delivery of the intervention.

If a program developer does not have a dissemination partner, it can be helpful to contact the TTO. In many universities, the TTO can be a valuable business development resource in seeking potential partners and negotiating terms prior to agreements being put in place. The process can be time consuming, and a good TTO can provide business skills to complement the academic scientific and clinical knowledge.

(p. 465) Universities differ widely in their level of coordination, resourcing, and focus on commercialization. Regardless, successful long-term relationships between program developers and purveyors require a multidisciplinary, commercial, strategic, and coordinated approach. From business development and due diligence activities to tactical negotiation and the drafting of complex legal agreements, as well as ongoing relationship and intellectual property management, the pathway can be long and bumpy, and can lead to many dead ends before the desired destination is reached.

Key characteristics of a suitable purveyor include the capability to undertake global business development, respect for adhering to and enforcing the fidelity requirements of developers, and a willingness to adapt to changes in those requirements. A purveyor needs a commitment to a long-term relationship and access to additional investment to further the implementation and research agenda. An entrepreneurial approach is also essential. However, the relationship needs constant nurturing rather than a “set-and-forget” approach; this involves continual maintenance, negotiation, and open communication. In most cases, a commercialization partner is taking on significant risk, as there is often no well-established existing market for the intervention.

Determining the Costs and Benefits of Interventions

Cost-effectiveness analyses should be conducted before scaling up and translating evidence-based programs to systems (Little, 2010). Cost-effectiveness analyses can influence whether policymakers and other potential systems will adopt the program, as they need to know if investment in the program will have financial benefits for their constituency. Parenting interventions tend to fare well in cost–benefit analyses. For example, Aos et al. (2011) conducted a careful economic analysis of the costs and benefits of implementing the Triple P system using only indices of improvement on rates of child maltreatment (out-of-home placements and rates of abuse and neglect). Their findings showed that, for an estimated total intervention cost of $137 per family, if only 10% of parents received Triple P, there would be a positive benefit of $1,237 per participant, with a benefit-to-cost ratio of $9.22. The benefit-to-cost ratio would be even higher when higher rates of participation were modeled.
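As a rough worked illustration of what such a ratio expresses (the published figure of $9.22 is derived from the full economic model, including discounting, and is not simply the quotient of the rounded per-family figures quoted above):

benefit-to-cost ratio = (estimated benefit per participant) / (estimated cost per participant) ≈ $1,237 / $137 ≈ 9

In other words, roughly $9 of benefit is returned for every dollar invested, consistent in magnitude with the reported ratio of $9.22.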

Cost–benefit considerations should also be incorporated at the program development stage. If the costs of developing and trialing a program are too high, the proceeds from the dissemination may never actually recover the initial investment costs.

Dissemination and Implementation

An implementation framework is needed to disseminate a program effectively. This includes engaging with systems and potential partners, developing contracts and commitments from partners to meet desired goals, developing the plan for implementation with the target system, and building training and accreditation days into the system. An implementation framework has been developed by purveyors for the Triple P system (McWilliam, Brown, Sanders, & Jones, 2016). The framework includes a range of specific tools for use by program consultants working with agencies to guide each stage of the implementation process (e.g., how to conduct line manager briefing, how to estimate population reach from different levels of investment in training).
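As a purely hypothetical back-of-envelope illustration of the kind of reach estimate such a tool supports (the numbers below are invented for illustration and are not drawn from the Triple P implementation framework itself): if an agency invests in training 20 practitioners, and each practitioner delivers a group program to 4 groups of 10 parents per year, the estimated annual reach is approximately 20 × 4 × 10 = 800 families. Varying the number of practitioners trained or the delivery volume per practitioner shows how different levels of investment in training translate into different levels of population reach.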

(p. 466) In our efforts to disseminate Triple P internationally, we have always sought to encourage others to establish a local evidence base at the site where the program is being implemented. For example, we have collaborated with many institutions to identify interested and competent researchers to conduct local evaluations of specific programs within the Triple P system to help build a local evidence base. Not only is sustainability more likely when there is local evidence of impact, but strategic alliances can also be built to increase the total pool of researchers around the world, contributing to the cumulative evidence base on parenting programs. Such an approach ensures that the program is responsive to local needs, fosters a spirit of openness and critical evaluation, and builds the local partnerships needed to sustain an intervention (Sanders, 2012). To maintain the local community of providers and researchers, it is useful to create links to the broader research community through international conferences (e.g., the Helping Families Change Conference, the biennial international conference for Triple P) and international networks (e.g., the Triple P Research Network, http://www.tprn.net) to further facilitate continued research collaborations and investments.

Potential Risks and Management of Risks

There are many potential risks that can occur across the program development cycle; however, two important areas that need attention are (a) managing developer-led and independently led evaluations and (b) managing conflicts of interest (COIs).

When building an evidence base for any field of research (e.g., parenting interventions) or for specific programs within a field of research (e.g., Triple P), there is a complementary need for both developer-led and independently led evaluations. However, determining what constitutes a developer-led or an independently led evaluation is more complicated than it appears. It is important to operationally define the two roles and then examine how they complement each other. Program developers are individuals who initiate the original idea and develop the program. Program developers may or may not own the program through a copyright agreement, a license agreement, or some form of intellectual property protection; it is common for employers to claim ownership of the copyright for a program. Developer-led research occurs when the program developers then evaluate the program they have developed (Sherman & Strang, 2009). Within the field of psychological intervention, program developers are most often involved in the early evaluation of interventions and provide the foundational evidence for the program (Sanders, 2015). In these early stages of evaluation, program developers must ensure that the evidence supporting the practice is reliable, robust, and transparent, and that evaluations are conducted in a manner that minimizes the potential for bias. Once proof-of-concept evidence is achieved (Valentine et al., 2011), program developers move toward replication research, and this is where the complementary process of independent evaluation is most valuable.

Independently led research is more difficult to define. Broadly, it occurs when the program developer is not involved in any stage of the research and is not an author on the subsequent publication. Typically, the research is conducted at an institution independent of the program developer. Many factors need to be considered when determining whether a study is independently led, including who conducted the study; where the study was conducted; who (p. 467) funded the study; who the contributing authors were and at which institutions they were based; who was responsible for the conceptual design of the study, measure selection, analysis, write-up, and interpretation of findings; and whether the developer or the organization providing approved training of staff was consulted during the evaluation process.

Independent evaluations are important for several reasons. As a form of replication research, they help control for COIs and some forms of bias and help identify issues or problems with program implementation. Commonly, independent evaluations are conducted under more heterogeneous conditions, therefore providing a useful test of the robustness of the intervention effects (Sherman & Strang, 2009). One argument for independent evaluations pertains to the management of potential COIs, whether financial or ideological, that can occur when program developers lead evaluation trials (Eisner, 2009). It is therefore important for developer-led evaluations to include mechanisms designed to avoid or minimize bias. Such safeguards include, but are not limited to, including COI statements in publications, prospectively registering trials on clinical trial databases (e.g., ClinicalTrials.gov, https://clinicaltrials.gov/), publishing the prospective trial protocol in a peer-reviewed journal, and maintaining an open data repository so that independent evaluators conducting systematic reviews or meta-analyses have access to the original data.

Independent evaluations also have limitations. As with any study, erroneous conclusions can occur when interventions are implemented with poor fidelity, when findings are selectively reported, when the actual level of developer involvement is not accurately reported, and when independent findings are themselves not replicated and are at variance with other available studies. Furthermore, independent evaluations are not free from potential bias. Sherman and Strang (2009) outlined a variety of factors that could bias an independent evaluation, such as skepticism about the program, financial or organizational pressures to show that programs do or do not work, prior predictions of null or negative findings for a program, an affiliation with a competing program, or a desire to disprove the value of a respected or popular program. While one form of evaluation is not considered “better” than the other, both need safeguards against bias.

The Management of Conflict-of-Interest Risks

In an attempt to manage the differing types of potential COI risks that can occur during the QA process of program development, Sanders (2015) developed a potential COI checklist for program developers, which is outlined in Table 43.1. A COI might exist when an author, or the institution employing an author, has a financial or other relationship with individuals or organizations that could inappropriately influence the author’s work. Examples of potential COIs include, but are not limited to, academic, personal, or political relationships; employment, consultancies, or honoraria; and financial connections such as stock, royalties, or funding of research. COIs might occur in situations where an investigator has a significant financial or other interest (e.g., employment) that might compromise, or have the appearance of compromising, professional judgment in the design, conduct, or reporting of research. There are several forms of COI for individuals involved with research and dissemination (Table 43.1).

Table 43.1: Types of Potential Conflicts of Interest

Potential conflict: Authorial contribution to program resources.
Reason for potential conflict: Authors of program resources that are disseminated by a purveyor company may benefit financially through receipt of royalties from sales of published program resources. This benefit could be current or anticipated if the program is deemed worthy of dissemination. Contributory authors may be university staff members, students, or collaborators from another university or research center working with a researcher or research team member. Another potential COI for authors is the avoidance of reputation damage if an intervention study fails.

Potential conflict: Nonauthorial staff member or student.
Reason for potential conflict: When an academic institution owns the copyright of a program, there could be a perception that staff and students affiliated with the institution may benefit indirectly from published research relating to the program. These benefits could include reduced resource costs due to discounts on the purchase of program materials, reduced training cost for research personnel, or increased opportunities to present at conferences or conduct workshops on an aspect of the program.

Potential conflict: Consultant, trainer, or staff member of the training organization or one of its subsidiaries.
Reason for potential conflict: A purveyor dissemination company may be licensed by the university to disseminate the program. A COI might exist if a consultant, trainer, or staff member employed by the training organization or one of its subsidiaries contributes to a research paper evaluating the program.

Potential conflict: Member of a dedicated research network or interest group.
Reason for potential conflict: Some programs have established international research networks or coalitions to promote quality research concerning the intervention. A perception may exist that membership in the network creates a COI even if no funds are provided by the network to support the individual’s work.

Potential conflict: Other authors contributing to research papers on the intervention.
Reason for potential conflict: An independent researcher with no affiliation to the university or the training organization may also have a conflict of interest related to a bias or theoretical allegiance to an alternative program or intervention paradigm.

(p. 468)

The Parenting and Family Support Centre (PFSC) at The University of Queensland has developed a COI management plan that aims to manage COIs and to make potential COIs as transparent as possible at each stage of the program development process. Moreover, a collaboration among program developers in the Australian psychosocial science field to develop clear guidelines for managing COIs is currently under way. We encourage other program developers in the field of psychosocial research to reflect on their QA and QC processes, liaise with their university research integrity units, and develop tailored COI management plans.

(p. 469) The Role of Critical Appraisal and Ongoing Innovation

Ongoing research and evaluation mean that no program can rest on its laurels (Winston & Jacobsohn, 2010). The impetus for changing a program comes from evidence showing inadequate outcomes with a specific client group, from feedback from practitioners or from parents as consumers, and from cross-fertilization from one area of research to another. This critical analytic approach is a dynamic process that should constantly strive for self-improvement. A single study, or indeed several well-conducted studies, will never be the final word on the effects of a program.

Implications and Conclusion

A major implication of QA issues is that developers working in research settings need to become more focused on the end user throughout program development and evaluation. If programs are to survive and flourish over time, there needs to be constant evolution and investment in research and development. Without such investment, programs become stale, are seen as irrelevant to the modern generation of parents, or are seen to apply concepts and procedures that fail to reflect advances in knowledge relevant to understanding specific problems or client populations. On the other hand, a vibrant, thriving research and development group, working in collaboration with others, can create outstanding programs that have great potential to benefit (p. 470) children, families, and society for years ahead. In the case of Triple P, QA processes have gradually evolved and now inform all current and future research and development of its system in a continuous, ongoing process designed to enable new generations of developers, researchers, students, and consumers to contribute to the program (see Box 43.1 for an example of the Triple P QA process in action).

Key Messages

  • Programs need to continually innovate to stay relevant to consumers in terms of both content and delivery.

  • Constant evaluation of intervention outcomes and acceptability is needed to inform program refinements.

  • Intervention developers can benefit from a QA framework to assist with innovation development.

References

Altman, D. G., Schulz, K. F., Moher, D., Egger, M., Davidoff, F., Elbourne, D., . . . CONSORT Group. (2001). The revised CONSORT statement for reporting randomized trials: Explanation and elaboration. Annals of Internal Medicine, 134, 663–694. doi:10.7326/0003-4819-134-8-200104170-00012

Aos, S., Lee, S., Drake, E., Pennuci, A., Klima, T., Miller, M., . . . Burley, M. (2011). Return on investment: Evidence-based options to improve statewide outcomes (Document No. 11-07-1201). Olympia: Washington State Institute for Public Policy.

Axford, N., & Morpeth, L. (2013). Evidence-based programs in children’s services: A critical appraisal. Children and Youth Services Review, 35, 268–277. doi:10.1016/j.childyouth.2012.10.017

Baer, D. M., Wolf, M. M., & Risley, T. R. (1968). Some current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis, 1, 91–97. doi:10.1901/jaba.1968.1-91

Chambless, D. L., & Ollendick, T. H. (2001). Empirically supported psychological interventions: Controversies and evidence. Annual Review of Psychology, 52, 685–716. doi:10.1146/annurev.psych.52.1.685

Crosby, P. B. (1984). Quality without tears. New York, NY: McGraw-Hill.

Eisner, M. (2009). No effects in independent prevention trials: Can we reject the cynical view? Journal of Experimental Criminology, 5, 163–183.

Flay, B. R., Biglan, A., Boruch, R. F., González Castro, F., Gottfredson, D., Kellam, S., . . . Ji, P. (2005). Standards of evidence: Criteria for efficacy, effectiveness and dissemination. Prevention Science, 6, 151–175. doi:10.1007/s11121-005-5553-y

Gallart, S. C., & Matthey, S. (2005). The effectiveness of Group Triple P and the impact of the four telephone contacts. Behaviour Change, 22, 71–80. doi:10.1375/bech.2005.22.2.71

Hodge, L., & Turner, K. M. T. (2016). Sustained implementation of evidence-based programs in disadvantaged communities: A conceptual framework of supporting factors. American Journal of Community Psychology, 58, 192–210. doi:10.1002/ajcp.12082

Kazdin, A. E., & Nock, M. K. (2003). Delineating mechanisms of change in child and adolescent therapy: Methodological issues and research recommendations. Journal of Child Psychology and Psychiatry, 44, 1116–1129. doi:10.1111/1469-7610.00195 (p. 471)

Little, M. (2010). Looked after children: Can existing services ever succeed? Adoption and Fostering Journal, 34, 3–7.

McWilliam, J., Brown, J., Sanders, M. R., & Jones, L. (2016). The Triple P Implementation Framework: The role of purveyors in the implementation and sustainability of evidence-based programs. Prevention Science, 17, 636–645. doi:10.1007/s11121-016-0661-4

Morawska, A., Haslam, D., Milne, D., & Sanders, M. R. (2011). Evaluation of a brief parenting discussion group for parents of young children. Journal of Developmental and Behavioral Pediatrics, 32, 136–145. doi:10.1097/DBP.0b013e3181f17a28

Sanders, M. R. (2012). Development, evaluation, and multinational dissemination of the Triple P—Positive Parenting Program. Annual Review of Clinical Psychology, 8, 1–35. doi:10.1146/annurev-clinpsy-032511-143104

Sanders, M. R. (2015). Management of conflict of interest in psychosocial research on parenting and family interventions. Journal of Child and Family Studies, 24, 832–841. doi:10.1007/s10826-015-0127-5

Sanders, M. R., Baker, S., & Turner, K. M. T. (2012). A randomized controlled trial evaluating the efficacy of Triple P Online with parents of children with early-onset conduct problems. Behaviour Research and Therapy, 50, 675–684. doi:10.1016/j.brat.2012.07.004

Sanders, M. R., & Kirby, J. N. (2014). Surviving or thriving: Quality assurance mechanisms to promote innovation in the development of evidence-based parenting interventions. Prevention Science, 16, 421–431. doi:10.1007/s11121-014-0475-1

Sherman, L. W., & Strang, H. (2009). Testing for analysts’ bias in crime prevention experiments: Can we accept Eisner’s one-tailed test? Journal of Experimental Criminology, 5, 185–200. doi:10.1007/s11292-009-9073-9

Stein, Z., & Heikkinen, K. (2009). Models, metrics, and measurements in developmental psychology. Integral Review, 5, 4–24.

Valentine, J. C., Biglan, A., Boruch, R. F., Castro, F. G., Collins, L. M., Flay, B. R., & Schinke, S. P. (2011). Replication in prevention science. Prevention Science, 12, 103–117. doi:10.1007/s11121-011-0217-6

Webster-Stratton, C. (1998). Preventing conduct problems in Head Start children: Strengthening parenting competencies. Journal of Consulting and Clinical Psychology, 66, 715–730. doi:10.1037/0022-006X.66.5.715

Winston, F. K., & Jacobsohn, L. (2010). A practical approach for applying best practices in behavioural interventions to injury prevention. Injury Prevention, 16, 107–112. doi:10.1136/ip.2009.021972a