In 2012, the Accreditation Council for Graduate Medical Education (ACGME) mandated that training programs “engage residents in patient safety and quality improvement [QI] activities” and created the Clinical Learning Environment Review (CLER), a national program to oversee efforts to engage residents in these and other key areas.1-3 Subsequently, many discussions among medical educators have centered on how to integrate training in patient safety and QI into residency programs.4,5 Most current QI curricula are delivered as didactic lectures or as hands-on clinical QI projects at the local hospital level.6
The increasing availability of “big data” generated through electronic health records, large health care outcomes databases, and clinical registries has created a new and potentially complementary approach to engaging surgical residents, beyond the more common classroom lectures and other clinical didactics on quality and patient safety. The value of big data in clinical care and in research has been well documented, but the application of these modern, evolving tools to graduate medical education (GME) training has received far less attention.7,8 This column summarizes some of the important skills and lessons these new tools offer to GME.
Ownership over QI and decision making
Accessing and studying big data can increase residents’ sense of ownership over the development and exploration of a QI hypothesis and provide important insights into the decision-making processes that underlie QI priorities. Residents may be asked to select a QI project from a list of existing project ideas, but they often are not privy to the internal organizational decisions that lead to the prioritization of one QI effort over another; they are tasked only with designing, implementing, and studying a QI intervention. Reviewing pertinent literature may improve residents’ understanding of the quality problem at hand, but without access to the initial data, they may not appreciate the assumptions and calculations used to frame the problem. Providing access to large data sources, and the training necessary to use them, shifts ownership of a QI question to the individuals designing the interventions.
For example, when asked to investigate readmission to their own hospitals, residents using big data may first ask, “What is the scale of the problem that I am considering? Is it a problem of variation across practitioners, hospitals, or regions? If so, are there indicators that suggest what might correlate with this variation, and are any of the covariates potentially modifiable in a way that improves quality broadly?” Residents engaged in big data QI research will learn the decision-making processes for initial data abstraction and analysis necessary to refine QI questions and to propose meaningful solutions. This training will enhance the ability of future physician leaders to further prioritize and advocate for the collection of the granular data necessary to test QI hypotheses and strategic improvement initiatives. The Quality In-Training Initiative, launched by the American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP®), is an example of a step in this direction.9
Gaining perspective
By providing population-level perspectives, big data can also be used to identify new targets for QI, as well as to elucidate potential limitations. Until the past decade, most health care quality and outcomes research was confined to single institutions or small multi-institutional series, limiting both its inferential power and its generalizability.
However, with the availability of large health care datasets—such as those found in ACS NSQIP, the Healthcare Cost and Utilization Project’s Nationwide Inpatient Sample (NIS), and specialty-specific registries—we can now aggregate data across large numbers of health care organizations to identify new opportunities for QI. For example, the Cystic Fibrosis Foundation’s national registry has been used to identify outliers in care, prompting some of the best health care institutions to identify potential areas of improvement.10 In much the same way as the appeal of fine art is best appreciated at a slight distance—viewers who stand too close may see only the brushstrokes and miss the work as a whole—health care quality can be more fully appreciated when viewed on a broader scale.
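As a toy illustration of this registry-level perspective, the sketch below uses entirely hypothetical per-hospital readmission rates and flags hospitals that fall more than two standard deviations from the group mean. Real registries rely on more sophisticated risk-adjusted control limits, but the underlying idea of outlier identification is the same.

```python
# Hypothetical registry sketch: flag outlier hospitals whose readmission
# rate sits more than two standard deviations from the registry-wide mean.
# (Crude stand-in for the risk-adjusted control limits real registries use.)
import statistics

rates = {  # hypothetical per-hospital readmission rates
    "A": 0.12, "B": 0.10, "C": 0.11, "D": 0.22,
    "E": 0.09, "F": 0.13, "G": 0.10, "H": 0.12,
}

mean = statistics.mean(rates.values())   # registry-wide average
sd = statistics.stdev(rates.values())    # sample standard deviation

# Hospitals whose rate deviates from the mean by more than two SDs.
outliers = [h for h, r in rates.items() if abs(r - mean) > 2 * sd]
print(outliers)  # ['D']
```

Only hospital D is flagged here; the next step for a resident would be asking whether D’s excess reflects its case mix or its care processes.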
The population-level perspectives gained from big data also lend important insights into the potential limitations of QI work. An important concept in QI is the observed-to-expected (O/E) ratio: the number of adverse events a hospital actually records, divided by the number a risk-adjusted model predicts for its patient mix. Although health care professionals aim to prevent all patient harm, the limitations in scientific knowledge and the often unpredictable nature of disease and treatment suggest that a certain percentage of imperfect outcomes may be unavoidable. Big data provides a lens through which we can develop evidence-based standards and quantify the degree to which a particular outcome may have been avoidable. This concept of “expected outcomes”—essentially a principle of “acceptability of negative outcomes”—is difficult to appreciate at the local hospital level. We can identify poor outcomes and use root-cause analyses to determine why they occurred, but it is difficult to know which outcome was “expected” for a given case. By identifying and comparing “apples to apples” through such processes as risk adjustment and propensity score matching, residents can begin to appreciate the concept of acceptability behind O/E ratios.
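To make the O/E idea concrete, a minimal sketch with hypothetical numbers: the “expected” count is the sum of each patient’s model-predicted risk, so two hospitals with identical raw event counts can have very different O/E ratios depending on case mix.

```python
# Hypothetical O/E sketch. The per-patient predicted probabilities would,
# in practice, come from a risk model (e.g., logistic regression on
# comorbidities and procedure type) fit on a broad population.
expected_probs = [0.05, 0.20, 0.10, 0.30, 0.15]  # model-predicted risks
observed = [0, 1, 0, 1, 0]                       # 1 = event occurred

expected_events = sum(expected_probs)  # 0.80 events "expected" for this mix
observed_events = sum(observed)        # 2 events actually observed

oe_ratio = observed_events / expected_events
print(round(oe_ratio, 2))  # 2.5 -> more events than the case mix predicts
```

An O/E ratio near 1.0 suggests outcomes in line with the case mix; values well above 1.0 flag potential quality problems even when the raw event count looks small.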
A better understanding of this acceptability principle can have important systemic implications in helping to increase the national acceptance of QI policies. For instance, the goal of a recent Centers for Medicare &amp; Medicaid Services (CMS) policy to penalize hospitals for readmissions is sometimes misinterpreted as CMS wanting to penalize all hospitals for all readmissions, when in fact the policy targets only hospitals with readmission rates that are “higher than expected”—a distinction that can be discerned only when big data are analyzed.
Generating new ideas for interventions
By analyzing different physician practice patterns, care pathways, and hospital organizational structures across hundreds or even thousands of different surgeons, hospitals, and regions of the country, big data may provide novel ideas for QI initiatives that extend beyond patient selection or technical refinement issues. For example, surgical outcomes have been shown to vary widely between surgeons with different experience levels, or between surgeons from different subspecialties. At the local level, QI efforts may focus on studying the differences in technique or resource utilization. However, when examined across hospitals or across regions of the country through the lens of big data, new system-level interventions, such as new care pathways or maintenance of competency strategies, may emerge as additional approaches to improving patient outcomes.
Promoting rigorous evaluation
A successful QI initiative must include an evaluation component, and big data provides a platform to teach trainees about data-driven evaluation of QI processes. Important concepts such as risk adjustment and O/E ratios are best taught with actual data. Residents can learn to identify and capture the data necessary for project evaluation, design evidence-based evaluation strategies, and critically analyze and interpret the results. This process is a logical extension of the evidence-based medicine concept into QI. Rather than assuming that a given QI intervention will work, big data enables and encourages the trainee to rigorously evaluate data from before and after the intervention to determine whether it did, or could, work.
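As a sketch of what such a before-and-after evaluation might look like in code—using hypothetical counts, and setting aside the risk adjustment a real analysis would require—a simple two-proportion z-test can ask whether an observed drop in complication rates is larger than chance alone would explain:

```python
# Hypothetical pre/post evaluation of a QI intervention via a
# two-proportion z-test on complication rates (no risk adjustment here;
# a real analysis would compare risk-adjusted O/E ratios instead).
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic and two-sided p-value for a difference in proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)                  # pooled event rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: 30/200 complications before, 18/210 after.
z, p = two_proportion_z(30, 200, 18, 210)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these made-up counts the drop reaches conventional significance, but the same code applied to a smaller cohort would not—exactly the kind of lesson about evidence and sample size the evaluation step is meant to teach.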
Conclusion
Big data has important implications for the tripartite mission of academic medical centers to support excellent clinical care, research, and education. We should recognize the value that big data offers for our educational mission and design specific offerings within the context of our education programs to allow trainees to learn about big data research tools and their applications to QI and patient safety. Though big data and the tools used in their systematic analysis have historically been reserved for dedicated health services researchers, they are increasingly available to our trainees and warrant increased attention as a means to engage learners in lifelong QI endeavors, which our nation’s health care system so desperately needs.
References
- Accreditation Council for Graduate Medical Education. Bylaws. Article II, Section 2b. P1. Effective July 1, 2013. Available at: www.acgme.org/acgmeweb/tabid/171/GraduateMedicalEducation/Policies.aspx. Accessed October 14, 2014.
- Nasca TJ, Philibert I, Brigham T, Flynn TC. The next GME accreditation system—rationale and benefits. N Engl J Med. 2012;366(11):1051-1056.
- Accreditation Council for Graduate Medical Education. Policies and procedures. Subject 16.00, Sections 16.10-16.20. P68-69. Approved September 27, 2014. Available at: www.acgme.org/acgmeweb/tabid/171/GraduateMedicalEducation/Policies.aspx. Accessed October 14, 2014.
- Myers JS, Nash DB. Graduate medical education’s new focus on resident engagement in quality and safety: Will it transform the culture of teaching hospitals? Acad Med. 2014;89(10):1328-1330.
- Weiss KB, Bagian JP, Nasca TJ. The clinical learning environment: The foundation of graduate medical education. JAMA. 2013;309(16):1687-1688.
- Wong BM, Etchells EE, Kuper A, Levinson W, Shojania KG. Teaching quality improvement and patient safety to trainees: A systematic review. Acad Med. 2010;85(9):1425-1439.
- Murdoch TB, Detsky AS. The inevitable application of big data to health care. JAMA. 2013;309(13):1351-1352.
- Jain SH, Rosenblatt M, Duke J. Is big data the new frontier for academic-industry collaboration? JAMA. 2014;311(21):2171-2172.
- Sellers MM, Reinke CE, Kreider S, et al. American College of Surgeons NSQIP: Quality in-training initiative pilot study. J Am Coll Surg. 2013;217(5):827-832.
- Greenwood V. Can big data tell us what clinical trials don’t? New York Times. October 3, 2014. Available at: www.nytimes.com/2014/10/05/magazine/can-big-data-tell-us-what-clinical-trials-dont.html?_r=0. Accessed May 12, 2015.