© 2007 Canadian Medical Association
At the fourth meeting of the Guidelines International Network, held in Toronto in August 2007, experts from 31 countries met to discuss the challenges and innovations afforded by clinical practice guidelines.1 Although each country faces unique local challenges to the implementation of effective health care, members of the guideline community have repeatedly and generously shared their solutions to these challenges, many of which are applicable in other countries. Nevertheless, it can be difficult for the local implementer with limited resources to harness this knowledge.
The Canadian Medical Association has long been a champion of enhancing the quality of clinical practice guidelines.2 The objective of the recently published Canadian Medical Association Handbook on Clinical Practice Guidelines3 (Box 1, Table 1) is to gather up-to-date, evidence-based, experience-driven guidance on how to use guidelines most effectively to improve patient care. This new handbook combines and updates the 1994 and 1997 documents “Guidelines for Canadian Clinical Practice Guidelines”4 and “Implementing Clinical Practice Guidelines: a Handbook for Practitioners.”5 The new handbook places the role of guidelines in health care into perspective, outlining where they are most useful. It helps the reader to decide if an existing guideline can be adapted or if a new guideline should be created, providing resources for both scenarios. The handbook also reviews evidence to guide those responsible for implementing recommendations through the bewildering array of available implementation strategies. In recognition of increasing demands for accountability and increasing emphasis on quality of care, the final chapter reviews the process of evaluating the effectiveness of guidelines.3
The handbook reviews in depth the methodologic steps in the guideline process, including the biases inherent in guideline development, the struggle to write accurate recommendations and an approach to implementation.
Readers of the new handbook will be able to use it in 3 ways: as a review of the key parts of the process, with reference to the published evidence; as a source of practical approaches to completing the part of the cycle in which the user is involved; and as an illustrative inventory of resources and links. Guideline developers and implementers may be health care practitioners, administrators, health organizations or policy-makers. Experienced guideline developers and implementers may find within the handbook innovations from the international community that apply to their work. For guideline users, knowledge of what makes a guideline "good" allows selection of the best guidelines, saving time and potentially improving patient outcomes. It is our hope that the handbook may even inspire a few guideline users to become local champions of best practices.
Dealing with bias in guideline development
One of the significant changes in the field of guideline development has been widespread acceptance of a standardized methodology for the production of clinical practice guidelines. Just as the reporting of randomized controlled trials has been improved by the CONSORT (Consolidated Standards of Reporting Trials) statement6 and the reporting of systematic reviews improved by the QUOROM (Quality of Reporting of Meta-analyses) statement,7 so the AGREE (Appraisal of Guidelines for Research and Evaluation) Collaboration has promoted validated criteria for writing clinical practice guidelines (www.agreecollaboration.org). The elements of the AGREE instrument are summarized in Table 2.
Each domain in the AGREE instrument reflects either a potential source of bias or issues related to clarity and understanding of the guideline. In trials, investigators aim to minimize bias to get closer to the “truth”: the same experiment conducted repeatedly should yield the same results. In clinical practice guidelines, minimizing bias should likewise lead various groups of developers who consider the same evidence to come up with similar recommendations. Sources of bias in the evidence-review process include the type of literature search used, the method used to evaluate the quality of the literature, and editorial independence.
As for any systematic review, it is vital that a systematic search strategy be used in the development of clinical practice guidelines. Relying on experts' recollection of the literature, as was common with consensus guidelines, is no longer sufficient. For example, Gilbert and associates8 compared historical recommendations with a systematic review of observational studies of the effect of infant sleeping position on sudden infant death syndrome. They found that by 1970, the literature demonstrated a statistically significantly increased risk of sudden infant death for sleeping on the front relative to sleeping on the back (pooled odds ratio 2.93, 95% confidence interval 1.15–7.47); however, guidelines did not consistently recommend the back-sleeping position until 1992. These authors concluded that use of systematic review techniques could have led to earlier recognition of the risks of sleeping on the front and might have prevented more than 10 000 infant deaths in the United Kingdom and at least 50 000 in Europe, the United States and Australasia.8 The recommendations in any clinical practice guideline should consider the results of the totality of the literature, giving greater weight to better-designed studies. One recent study9 found that applying 2 different quality-evaluation methods (Cochrane or best-evidence synthesis) led to different recommendations. Listing a level of evidence for each recommendation forces the guideline developer to identify the strength of the evidence supporting the statement.
Editorial independence, the sixth and last domain in the AGREE instrument (Table 2), asks the reader to evaluate conflict of interest. Financial conflict of interest has been the type of bias most widely discussed. One study of clinical practice guidelines published between 1991 and 1999 found that 87% of guideline authors had interactions with the pharmaceutical industry, 58% had received financial support for research, and 38% had been employees of or consultants for a pharmaceutical company.10 To score well in this domain of the AGREE instrument, the guideline must state not only that all group members have declared whether they have any conflict of interest (using standard forms), but also that the views or interests of the funding body have not influenced the final recommendations. The American College of Chest Physicians guards against this type of bias by disallowing participation of any guideline panel member who does not complete a conflict of interest disclosure form and by careful review of each disclosure form. Conflicts declared by panel members are reviewed using a graded consideration based on the potential level of conflict, whether the conflict can be managed within established parameters and whether the panel member has expertise that would allow participation in a related area that does not involve the conflict. The disclosures that prove most difficult to evaluate receive full committee review.11
Many journals now refuse to publish clinical practice guidelines unless statements of conflict of interest are available, and many guideline developers ask participants in the guideline development process to fill out standard conflict of interest forms. A description of how potential conflicts have been addressed is often lacking, however. Given that 7% of the authors surveyed by Choudhry and associates10 stated that their own relationships influenced recommendations, and 19% thought that their coauthors' recommendations were influenced, this form of bias remains a significant challenge to the reliability of clinical practice guidelines. Other potential sources of bias also exist, such as long-term service to government committees or private insurers, participants' previously established “stake” in an issue, the way in which developers make their living and personal experiences.12
Developing recommendations: grading systems
Many guideline developers provide a legend to explain how they came to each of the recommendations. The strength of recommendations is often categorized according to a specific "grading system," which usually considers only levels of evidence but sometimes also addresses other factors that might influence the strength of the recommendation, such as the magnitude of the therapeutic risk reduction and the magnitude of potential harms and benefits for possible outcomes. The GRADE (Grading of Recommendations, Assessment, Development and Evaluation) system is an international effort to standardize the approach to making recommendations (www.gradeworkinggroup.org). The GRADE approach assigns evidence "quality" at 1 of 4 levels — very low, low, moderate or high — on the basis of specific criteria. Each recommendation is then based on a judgment of net benefits, including whether the net benefits are positive, negative or uncertain. This system and other grading schemes,13,14 which are less explicit but still provide transparency, are compared in Table 3. In 2003, the GRADE Working Group itself acknowledged that there is no published evidence on how best to communicate grades of evidence and recommendations.15
Guideline developers often modify grading systems to reflect their specific needs. The US Preventive Services Task Force, an active participant in the GRADE Working Group, has chosen to maintain its own recommendation grading system16 to reflect its narrower focus on prevention, in contrast to the broader clinical scope of the GRADE Working Group. Until sufficient evidence accumulates to demonstrate the superiority of one system over another, concentrating on making recommendations clear and reflective of the evidence is a reasonable approach for most guideline developers.
Implementation of guidelines
In the end, even well-designed guidelines in the same area will occasionally differ in their recommendations. Developers should therefore consider, during the development phase of any guideline, its ease of implementation (implementability). The recent GLIA (GuideLine Implementability Appraisal) instrument takes the developer through a series of validated questions that ask about factors known to predict the relative ease of implementation of guideline recommendations.17 The currently recommended approach to implementation is summarized in Box 2. Implementation strategies may be most effective when they are targeted to locally identified facilitators and barriers to implementation. Barriers may be effectively identified through a process as simple as structured reflection by the implementation group.18 Many implementation strategies have shown modest benefit, and multiple strategies often work better than single ones.19 Although the sheer number of possible implementation strategies precludes their description here, the interested reader is directed to the handbook,3 which reviews the major implementation strategies that have been assessed in the literature.
Other guideline manuals
Many guideline development organizations have manuals outlining their methods for interested readers.13,20,21 For users who do not have the resources to develop their own clinical practice guidelines, help is available for finding, evaluating and adapting existing guidelines for local use (Box 3). Other groups have provided guides for implementation and chart-type tools to guide practice (Box 3). For those wanting to test how well their guideline has worked, evaluation strategies are harder to find: usually the process of designing and performing a guideline evaluation involves a review of primary studies of guideline evaluation to determine which evaluation strategy would be most suitable.22
Other organizations that offer extensive English-language collections of resources and references include Australia's federal guideline agency, the New Zealand Guidelines Group, the Scottish Intercollegiate Guidelines Network and the UK National Institute for Health and Clinical Excellence (Box 3). The Guidelines International Network provides a wealth of international resources (Box 3). To our knowledge, until now, no Canadian organization has brought all these resources together in a single document, but these are all listed in the Canadian Medical Association Handbook on Clinical Practice Guidelines.3
What the future holds for guidelines and the handbook
Current research related to clinical practice guidelines includes studies of the role of patient involvement in guideline development, the validation of tools to enhance the implementability and evaluation of guidelines, and examination of the balance between studying and implementing guidelines. As the field of guideline methodology matures, developers will also struggle with providing guidance in the context of multidisciplinary approaches to care and with addressing the needs of patients who have multiple chronic conditions.
The new handbook, like clinical practice guidelines themselves, should be considered a living document, responsive to changes in the literature and feedback from guideline implementers. It will therefore need regular updating to incorporate advances in knowledge about clinical practice guidelines. We welcome any comments that readers of this article may have.
Footnotes
Contributors: All of the authors made substantial contributions to the content and framework of the article, revised it critically for important intellectual content and provided final approval of the version to be published.
Competing interests: None declared.