Evaluating the Relevance, Generalization, and Applicability of Research

@article{GreenGlasgow,
  title={Evaluating the Relevance, Generalization, and Applicability of Research},
  author={L. W. Green and Russell E. Glasgow},
  journal={Evaluation \& the Health Professions},
  pages={126--153}
}
Starting with the proposition that “if we want more evidence-based practice, we need more practice-based evidence,” this article (a) offers questions and guides that practitioners, program planners, and policy makers can use to determine the applicability of evidence to situations and populations other than those in which the evidence was produced (generalizability), (b) suggests criteria that reviewers can use to evaluate external validity and potential for generalization, and (c) recommends… 


Making research relevant: if it is an evidence-based practice, where's the practice-based evidence?
This examination of the research pipeline looks upstream for ways in which research is rendered increasingly irrelevant to the circumstances of practice by the vetting required before it can qualify for inclusion in systematic reviews and the practice guidelines derived from them.
Generalizing about Public Health Interventions: A Mixed-Methods Approach to External Validity.
  • L. Leviton
  • Medicine
    Annual Review of Public Health
  • 2017
A review of methods and how they might be combined to better assess external validity of evidence-based interventions (EBIs) and how to expand causal generalizations.
Revisiting concepts of evidence in implementation science
It is suggested that funders and reviewers of research should adopt and support a more robust definition of evidence and see capacity as a necessary ingredient to shift the field’s approach to evidence.
Assessing and Strengthening Evidence-Based Program Registries’ Usefulness for Social Service Program Replication and Adaptation
Evidence-based program registries provide insufficient information to guide context-sensitive decision making about program replication and adaptation, and should supplement their evidence base with nonexperimental evaluations and revise their methodological screens and synthesis-writing protocols.
Commentary: Generating rigorous evidence for public health: the need for new thinking to improve research and practice.
The three reviews in this symposium seek to compare and contrast several evaluation designs that are alternatives to the randomized controlled trial, and provide an overview of the value and approaches for generating practice-based evidence.
Understanding the value of adhering to or adapting evidence-based interventions: a study protocol of a discrete choice experiment
This project will offer unique insights into decision-making processes that influence how EBIs are used in practice, needed for a more granular understanding of how practitioners manage the fidelity–adaptation dilemma and thus, ultimately, how the value of EBI implementation can be optimized.
Moving from efficacy to effectiveness trials in prevention research.
Implementation science and its application to population health.
For implementation science to reach its full potential to improve population health, the existing paradigm for how scientists create evidence, prioritize publications, and synthesize research needs to shift toward greater stakeholder input and improved reporting on external validity.
Practice to Evidence: Using Evaluability Assessment to Generate Practice-Based Evidence in Rural South Georgia
Evaluability assessment can identify programs most likely to produce useful results for dissemination and is a viable approach for local initiatives to generate practice-based evidence in rural or low-resource settings.


The challenges of systematically reviewing public health interventions.
Many of the issues faced when reviewing evidence of public health intervention effectiveness are outlined, including how to identify, select and critically appraise relevant research and to collect and analyse data from the studies included in the review.
Interventions That Are CURRES: Cost-Effective, Useful, Realistic, Robust, Evolving, and Sustainable
The goals are to review the existing norms regarding “good” interventions, suggest alternative criteria, and examine the implications of the new criteria for the conduct of intervention research.
Developing and using the Guide to Community Preventive Services: lessons learned about evidence-based public health.
The Guide to Community Preventive Services (Community Guide) identifies promising interventions that have not been adequately researched, thus helping to inform the public health research agenda.
Efficacy, effectiveness, variations, and quality. Boundary-crossing research.
This article considers future directions for quality-assessment research and the uses to which its products should be put.
Evaluating the impact of health promotion programs: using the RE-AIM framework to form summary measures for decision making involving complex issues.
This work proposes and discusses a series of composite metrics that combine two or more RE-AIM dimensions and can be used to estimate overall intervention impact, offering the potential to help identify interventions most likely to meaningfully affect population health.
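As a rough illustration of the composite-metric idea, combining RE-AIM dimensions as products of proportions is one way such summary measures can be formed (for example, reach multiplied by effectiveness as an individual-level impact index). The sketch below is a minimal, hypothetical example; the specific combinations and values are assumptions for illustration, not the metrics defined in the paper.

```python
def composite(*dims: float) -> float:
    """Multiply RE-AIM dimensions, each expressed as a proportion in [0, 1].

    A product form means a weak score on any single dimension drags the
    overall estimate down, which reflects the intuition that population
    impact requires doing reasonably well on all dimensions at once.
    """
    result = 1.0
    for d in dims:
        if not 0.0 <= d <= 1.0:
            raise ValueError("each dimension must be a proportion in [0, 1]")
        result *= d
    return result

# Hypothetical program: reaches 40% of the target population with a 25%
# effect, and is adopted by 60% of settings delivered with 80% fidelity.
individual_impact = composite(0.40, 0.25)  # reach x effectiveness
setting_impact = composite(0.60, 0.80)     # adoption x implementation
```

For instance, the hypothetical program above scores 0.10 on the individual-level index and 0.48 on the setting-level index, suggesting its main weakness lies in reach and effectiveness rather than in delivery.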
Beginning with the application in mind: Designing and planning health behavior change interventions to enhance dissemination
The RE-AIM framework (reach, efficacy/effectiveness, adoption, implementation, and maintenance) is described, along with how it can be used to plan and design studies with features that strengthen the potential translation of interventions.
Practical clinical trials: increasing the value of clinical research for decision making in clinical and health policy.
Increasing the supply of pragmatic or practical clinical trials will depend on a mechanism for establishing priorities for these studies, significant expansion of the infrastructure for conducting clinical research within the health care delivery system, greater reliance on high-quality evidence by health care decision makers, and a substantial increase in public and private funding for these studies.
TREND: an important step, but not enough.
Reporting the TREND criteria will improve the quality of the literature, but additional criteria related to external validity are also needed to address issues critically important for the translation of research to practice.