The Journey From End Results to the National Cancer Database

Heidi Nelson, MD, FACS; David P. Winchester, MD, FACS; and Ryan M. McCabe, PhD

January 5, 2022

“I often say that…when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind….”
–Lord Kelvin1

In this issue of the Bulletin, we review the history of the American College of Surgeons (ACS) Commission on Cancer (CoC), including the people and leaders who delivered on the promise set forth by the founders, the member organizations and partners who shared and supported the vision, and the work that was done through standards and record keeping to improve the lives of people with cancer. A critically important part of this 10-decade retrospective is not only to tell the story of how CoC standards and accreditation evolved, but also to describe how records and registries were transformed into the National Cancer Database (NCDB). No story about the CoC would be complete without a look at how the founders’ vision of patient follow-up, standardization, and record keeping became a powerful research engine: the NCDB, today a repository of information that continues to fuel new knowledge and improve quality of care.

A consistent theme throughout the past 100 years of the CoC was standardization, whether applied to patient care, record keeping, coding, or the reporting of cancer results. Another consistent theme was adaptability. As highlighted in the history of the CoC, adaptability was evident as the six basic standards grew over time into 38 diverse standards covering all phases of cancer care, from prevention and screening to survivorship and surveillance.

The emphasis on standardization also was central to the story of records and registries. Patient follow-up and continuous adaptation to new forms of record keeping and technologies moved cancer information from paper abstracts filed at local hospitals to a national and computerized database.

In this article, we summarize the journey of CoC standardization, end results, and technologic adaptation. Furthermore, we explain how these adaptations and the eventual creation of the NCDB led to the continuous improvement of cancer care and how the CoC is poised to take quality improvement to the next level.

Early beginnings: Records and registries

The Committee on the Treatment of Malignant Diseases

The ACS Committee on the Treatment of Malignant Diseases (COTMD) was established in 1922 with a charge from the ACS Board of Regents to focus on translating contemporary knowledge into practice through the implementation of cancer standards that would ensure the optimization of cancer care and outcomes.2 It should go without saying that realizing optimal care and outcomes would require outcomes data. It was unsurprising, therefore, that the original framers of the COTMD included within the Minimum Standard for Cancer Clinics a requirement for cancer records inclusive of history, examinations, treatments, and, importantly, periodic follow-up examinations for up to five years. The early adoption of records as a means of monitoring care outcomes likely derived from the early ACS experience with the Registry of Bone Sarcoma.

The first registry

Ernest Amory Codman, MD, FACS, as Chairman of the Committee on Registry of Bone Sarcoma, is credited with creating the first ACS cancer registry in 1921. At the 1923 Clinical Congress, Dr. Codman presented “The Registry of Bone Sarcoma as an Example of the End-Result Idea in Hospital Organization,” stating,

"In the first place, what is the end-result idea? It is that every hospital should trace each patient with the object of ascertaining whether the maximum benefit has been obtained and to find out if not, why not…. Having ascertained the results of the treatment of each patient, the existing organization need not be changed until a more promising one is thought out. The end-result idea merely demands that the results shall be constantly analyzed and possible methods of improvement constantly considered."3

The end-result framework and the data fields required to make a tumor registry meaningful had been proposed and penned, and were now ready for implementation.

The first study

The first recorded clinical study of cancer was an assignment from the Board of Regents to the Cancer Campaign Committee in 1913 to analyze case records of patients with uterine or cervical cancer who were considered “cured” three years after treatment. In 1924, the committee report to the Board of Regents stated that the study showed “that surgery and radiation therapy were equally effective for early-stage disease of the uterine cervix and that palliation and survival for advanced stage disease were improved by radiation therapy.”4

By 1976, the CoC introduced the first national patient care evaluation study, examining the relationship between estrogen use and primary tumors of the liver, and by 1994, 20 patient care studies had been conducted in 17 primary cancer sites, involving more than 500,000 analytic cases and resulting in 75 published articles.4

The first “records” standard

Likely influenced by the Registry of Bone Sarcoma and the methodologies set forth in the uterine-cervix study, the first official standard for records was published in the 1931 ACS Bulletin as part of the Minimum Standard for Cancer Clinics (see Table 1). In addition to the general medical records required in every approved hospital, programs were expected to produce cancer-specific records, which were to include details of the history, physical examination, treatment, and periodic examinations of patients for at least five years after treatment. The stage was well set for what was to follow: in essence, the evolution from local hospital tumor record keeping to national standardization of nomenclature and coding, and eventually to a national computerized system for collecting and aggregating data for reporting and research.

TABLE 1. CoC Standards on Records, Registries, and Coding

Standardization of records, data, coding, and registry requirements

The importance of standardizing patient and cancer information was clear from the beginning of the CoC: standardization would inform a system for properly collecting, analyzing, comparing, and contrasting end results (see Table 1 for an overview).

Manual for Cancer Programs

Although record-keeping standards were put forth in 1931, procedures essential for hosting registries were not delineated until the Manual for Cancer Programs was published in 1953.5 Registry requirements included obtaining the complete roster of patients; accessioning cases; constructing a case abstract file; constructing a patient index file; guaranteeing follow-up and constructing a control file; coding, filing, and indexing by diagnosis; and making clinical data and information available as desired. Grant support for these newly required registries, and for establishing and using national standards, came from the National Cancer Institute (NCI) and the American Cancer Society.6

National Standards and the American Joint Committee on Cancer

As noted previously, the need to standardize methods for reporting cancer end results was clear from the start. “The lack of uniformity in reporting cancer end results is a source of worry to clinicians and research workers seeking to evaluate therapy from published reports,” according to “A standardized method for reporting cancer end results,” published in the Bulletin in the summer of 1953. To overcome this difficulty, “…the Joint Committee on Reporting Cancer End Results was established in 1950 to recommend rules and forms aimed at eliminating the major causes of confusion.”7

TABLE 2. The Evolution of Cancer Registry Reporting Leading to the NCDB

By 1953, the Manual for Cancer Programs stated, “Uniform definitions and uniform systems of classification are prerequisites in the advancement of scientific knowledge.” Accordingly, programs were now expected to use, where applicable, the Manual of the International Statistical Classification of Disease, Injuries and Causes of Death, the American Medical Association’s Standard Nomenclature of Disease and Operations, and the American Cancer Society’s publication the Manual of Tumor Nomenclature and Coding.5

In 1957, the ACS and the American College of Radiology (ACR) agreed that a U.S. committee should be organized to develop a system of clinical staging of cancer by site. The American Joint Committee on Cancer Staging and End Results Reporting (later the American Joint Committee on Cancer, or AJCC) was therefore established in 1959.7 Membership in the newly formed Joint Committee included representation from the ACS, ACR, College of American Pathologists, American College of Physicians, American Cancer Society, NCI, Cancer Control Program, and the U.S. Public Health Service (USPHS) Bureau of State Services.

National standard setters and key partners

Long-standing relationships with the American Cancer Society, the NCI, and the Centers for Disease Control and Prevention (CDC) helped drive national standardization efforts. Indeed, the relationship between the American Cancer Society and the College dates back to their respective origins in 1913 as the American Society for the Control of Cancer and the Cancer Campaign Committee.8 This relationship has been productive for the evolution of staging standards, the development of the NCDB, and the implementation of key cancer initiatives related to quality of care, screening, prevention, and more. The practice of adopting international classifications and codes remains key to the success of the CoC and explains its ongoing commitment to the North American Association of Central Cancer Registries. The CoC is grateful for the long-standing partnerships and grant support from the American Cancer Society, NCI, and CDC, as well as past support from the USPHS Bureau of State Services.6,7

From local cancer reporting to the NCDB

The three-decade chain of events leading up to the development of the NCDB is outlined in Table 2. Further details follow.

From local hospital summaries to analytic caseloads

By 1953, standardized templates were available to accredited hospitals to ensure that “an adequate Cancer Registry is the active repository of abstracts or records of the total cancer experience within an institution from a given date, from which data pertaining to cancer cases can be obtained, and upon which continuing evaluation studies may be based.”5 Although these first hospital registries were paper-based and often handwritten, they were standardized in their definitions, such that data collected on cancer patients in New York, NY, would be comparable with data entered about patients in Toledo, OH. The stacks of paper could be added together into a single stack, representing a single data set.

As patient care evaluation studies were introduced in the 1970s, it became clear that central data collection and analysis needed to be easier and more accessible. The CoC and ACS investigated and stimulated the advent of computerized registry options, including the development and distribution of CanSUR®/Net software.4 Although only 25 percent of approved hospitals initially used a computerized cancer registry, that number increased to 80 percent over the ensuing decade. National patient care studies also were aided by the introduction of the Data Acquisition Manual (DAM), which offered a comprehensive set of data definitions and coding instructions, ranging from “Worrisome, which was added to the list of terms not to be interpreted as involvement” to full staging definitions, including those of the NCI Surveillance, Epidemiology, and End Results (SEER) program and the AJCC.

TABLE 3. Evidence that Quality Measures Influence the Adoption of Best Practices

Full analytic caseloads were sent on disk by mail to the NCDB so these standardized data sets could be concatenated and analyzed. This was the first time a need arose for a call for data, and the annual NCDB reports sprang forth as a result. Much of this paradigm is still intact today because of these initial constraints: data are abstracted at the hospital registry, and although electronic health records (EHRs) and registry vendor software are used, most data abstraction is still a manual process of moving information from one system to another.
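
To make this concrete, the sketch below shows why shared data definitions let per-hospital caseloads be combined by simple concatenation. It is a minimal illustration in Python; the directory, file layout, and provenance column are assumptions for the example, not the NCDB's actual submission format.

```python
# Minimal sketch: because every hospital abstracts cases against the same
# standardized definitions, per-hospital files can simply be concatenated
# into one national data set. File and column names are hypothetical.
import csv
from pathlib import Path

def concatenate_caseloads(submission_dir: str) -> list[dict]:
    """Combine standardized per-hospital case files into one data set."""
    combined = []
    for path in sorted(Path(submission_dir).glob("*.csv")):
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                row["source_file"] = path.name  # keep provenance per case
                combined.append(row)
    return combined

cases = concatenate_caseloads("submissions/2024")
print(f"{len(cases)} analytic cases combined")
```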

Modern-day NCDB functions evolve

When the NCDB receives data, a tremendous amount of quality assurance, data transformation, and analysis occurs, ranging from ensuring that dates are all in identical formats to checking that codes are valid, and that is before any of the subsequent computational work for quality measures and risk-adjusted metrics. The NCDB rejects submitted cases that do not pass these rigorous quality checks. And although the checks are executed by machine, the logic itself is managed by hand over time and has grown to thousands of lines of code.
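
The sketch below illustrates the flavor of such edit checks: normalize dates to one identical format, validate codes against a list of allowed values, and reject cases that fail. The field names and code list are hypothetical stand-ins, not actual NCDB edits.

```python
# Hedged illustration of registry-style edit checks: date normalization
# and code validation, with failing cases rejected. Values are hypothetical.
from datetime import datetime

VALID_SITE_CODES = {"C18.7", "C34.1", "C50.9"}  # sample valid-value list
DATE_FORMATS = ("%Y-%m-%d", "%m/%d/%Y", "%Y%m%d")

def normalize_date(value: str):
    """Return the date in one canonical format, or None if unparseable."""
    for fmt in DATE_FORMATS:
        try:
            return datetime.strptime(value, fmt).strftime("%Y-%m-%d")
        except ValueError:
            continue
    return None

def check_case(case: dict) -> list[str]:
    """Collect edit-check failures for one submitted case."""
    errors = []
    normalized = normalize_date(case.get("date_of_diagnosis", ""))
    if normalized is None:
        errors.append("unparseable date_of_diagnosis")
    else:
        case["date_of_diagnosis"] = normalized
    if case.get("primary_site") not in VALID_SITE_CODES:
        errors.append("invalid primary_site code")
    return errors

submitted = [
    {"date_of_diagnosis": "03/15/2021", "primary_site": "C50.9"},  # passes
    {"date_of_diagnosis": "2021-13-40", "primary_site": "Z99.9"},  # rejected
]
accepted, rejected = [], []
for case in submitted:
    (rejected if check_case(case) else accepted).append(case)
print(f"{len(accepted)} accepted, {len(rejected)} rejected")
```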

Since those first days of compiling small data sets into larger ones accessible through something like a spreadsheet, technology has evolved to present new opportunities. With the national rollout of EHRs, and with the web now fitting into anyone's pocket, providing real-time quality improvement data to clinicians has become a realistic possibility. Data need to be as timely as they are complete; waiting two years after a patient is diagnosed to complete the cancer record is no longer the only option.

The Rapid Cancer Reporting System (RCRS)

The RCRS was rolled out in September 2020 as part of the College's Quality Data Platform. The system was designed both to give CoC hospitals a better way to access NCDB data and to set the stage for concurrent abstracting, the registry practice of submitting the same cancer case multiple times as the data across the history of the diagnosed disease are completed. A staged case may be enough to establish an initial case cohort for a disease site and time period and can initiate that patient's inclusion in a quality measure, including care alerts that remind programs of the time remaining to satisfy the measure. The NCDB first published five quality measures in 2006, using data from the annual call for data.
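
Conceptually, concurrent abstracting amounts to merging successive partial submissions of the same case by a stable identifier, with later submissions filling in fields as the record completes. Below is a minimal sketch of that merge logic; the accession-number key and field names are hypothetical, not the actual RCRS submission format.

```python
# Hedged sketch of concurrent abstracting: the same case is submitted
# repeatedly as its record fills in, and each resubmission is merged over
# the stored record by case identifier. All names here are hypothetical.
registry: dict[str, dict] = {}

def submit(case: dict) -> None:
    """Merge a (possibly partial) resubmission into the stored case."""
    stored = registry.setdefault(case["accession_number"], {})
    # Later submissions overwrite or extend earlier ones, field by field;
    # None marks data elements not yet completed.
    stored.update({k: v for k, v in case.items() if v is not None})

# First pass: staging alone is enough to place the patient in a cohort.
submit({"accession_number": "A-001", "primary_site": "C50.9",
        "stage": "II", "first_treatment": None})
# Later pass: treatment data completes the same record.
submit({"accession_number": "A-001", "first_treatment": "surgery"})

assert registry["A-001"]["stage"] == "II"
assert registry["A-001"]["first_treatment"] == "surgery"
```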

By 2014, it was clear that the availability of data and quality measures was driving high levels of compliance with best cancer practices (see Table 3). The Quality Integration Committee and NCDB staff continue to develop and deliver high-value quality measures to move best practices forward.

Cancer care continues to evolve, as does technology. The need to stay current with clinical practice is nothing new, but the degree of integration between technology and medicine, and the pace of change, continue to increase. Data definitions long standardized at the registry level are being pushed into clinical interfaces so that data enter these systems in computer-readable formats. As a result, portions of a case can be abstracted automatically into registries, registrars can use better tools for semi-automated data abstraction, and data-driven information can be returned in real time to clinicians and patients in the hospital setting.
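
To illustrate the idea, the sketch below maps discrete, coded fields from a structured clinical report directly into registry data items, the kind of automatic abstraction that free-text reports cannot support. The field names and mapping are hypothetical, for illustration only.

```python
# Hedged sketch: when clinical systems capture data as discrete, coded
# fields rather than free text, registry items can be populated by a
# direct mapping instead of manual abstraction. Names are hypothetical.

# A structured (synoptic-style) report: computer-readable from the start.
structured_report = {
    "tumor_site": "C50.9",
    "histologic_type": "8500/3",  # ICD-O-3-style code, sample value
    "tumor_size_mm": 14,
    "margins": "negative",
}

# Hypothetical mapping from clinical fields to registry data items.
FIELD_MAP = {
    "tumor_site": "primary_site",
    "histologic_type": "histology",
    "tumor_size_mm": "tumor_size",
}

def auto_abstract(report: dict) -> dict:
    """Populate registry fields automatically from structured input."""
    return {registry_field: report[clinical_field]
            for clinical_field, registry_field in FIELD_MAP.items()
            if clinical_field in report}

print(auto_abstract(structured_report))
# Unmapped fields (e.g., 'margins') still fall to the registrar: the
# semi-automated middle ground described above.
```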

The value of NCDB

Facts about NCDB

The NCDB, initiated in 1985 and fully implemented in 1989, had accepted more than 31 million analytic cases by 2015, with roughly 1.5 million new cases reported annually, representing 72 percent of U.S. cancer patients.9 A total of 1,500 CoC-accredited programs contribute cases each year, and 200 to 300 data fields are collected for each case, covering 75 different cancer sites. The NCDB has been a rich repository of information for expanding our knowledge about specific cancers, patterns of care, trends over time, quality of care, hospital performance, and more.

The value of NCDB as a resource for CoC-accredited programs

Six NCDB reports are routinely generated and shared with accredited programs: the NCDB Hospital Comparison Benchmark Report, NCDB Survival Reports, the Site by Stage Distribution Report, NCDB Completeness Reports, the Rapid Cancer Reporting System, and the Cancer Quality Improvement Program. Each of the six reports serves a unique purpose; most are adjusted based on stage, and most are designed to help accredited programs understand their short- and long-term cancer care performance. General uses for contemporary NCDB data and reports include:

  • Tracking and analyzing all types of cancers for trends, benchmarks, and outcomes
  • Community assessment for the identification of potential gaps and areas in need of outreach
  • Quality improvement for identifying and closing performance gaps
  • Cancer facility administrative reports for understanding the practice, growth areas, patterns of care, and allocation of resources

Accredited programs find many other local uses for NCDB data, including medical staff committee reports, practice management, identification of gaps in cancer services, and as part of research projects, grant submissions, and trainee education.

Participant User File

In 2010, CoC investigators considered it time to leverage the millions of cancer patient records housed in the NCDB, and the Participant User File (PUF) program was initiated by seven disease site teams, which reviewed de-identified, case-level data that had been encrypted and distributed by compact disc. The first two years of piloting the PUF program resulted in 47 abstracts and 11 manuscripts accepted by high-quality journals. By the time the PUF was launched to all CoC investigators in 2013, fewer than 100 publications had used NCDB data; as of 2021, more than 1,300 publications have cited the NCDB PUF program.

The PUF program continues to receive nearly 1,000 applications for data every year from CoC investigators. Uses of NCDB PUF data have ranged from straightforward characterization of cancer demographics and reporting of outcomes in specific cancers to calls for its use in developing clinical trial accrual targets for the enrollment of racial and ethnic minorities. It is gratifying that PUF data have been used to identify gaps and critical variations in practice that are then addressed by quality improvement efforts, and that they have helped shed light on important disparities in cancer treatment and outcomes that need to be addressed.

The next chapter of NCDB

While standardizing and computerizing five-year end results has served the CoC and the cancer community well for 100 years, the availability of real-time, actionable data, afforded by the transition to the RCRS data platform, opens new opportunities for improving the quality of cancer care. Another key step in this journey will be to standardize the information that comes from health care professionals at the point of care. Structured, standardized synoptic medical reports have the potential to improve both the quality and the immediacy of patient data. Entering structured, standardized information as patients proceed from diagnosis to treatment will help programs monitor patient alerts, measure and correct practice performance, and use quality data in new ways and for new purposes. As we have come to understand from Peter Drucker and the world of business, "…what gets measured gets improved." The future is bright for the many and diverse opportunities to improve cancer care through continuous assessment and quality improvement using data from the RCRS and the NCDB.

References

  1. Kelvin WT. “Electrical Units of Measurement.” In: Popular Lectures and Addresses. London: Macmillan and Company, 1891.
  2. Committee on the Treatment of Malignant Diseases. Organization of Service for the Diagnosis and Treatment of Cancer. Bull Am Coll Surg. 1931;15(2):22-25.
  3. Codman EA. The registry of bone sarcomas as an example of the end-result idea in hospital organization. Bull Am Coll Surg. 1924;8(1):34-38.
  4. Brennan MF, Clive RE, Winchester DP. The CoC: Its roots and destiny. Bull Am Coll Surg. 1994;79(6):14-21.
  5. American College of Surgeons. Manual for Cancer Programs. Chicago, IL: American College of Surgeons; 1953:2-37.
  6. American College of Surgeons. Cancer registry is ACS requirement. Bull Am Coll Surg. 1956;41(1):36.
  7. American College of Surgeons. A standardized method for reporting cancer end results. Bull Am Coll Surg. 1953;38(4):149-153.
  8. McGinnis LS. Common origins: The two ACSs—100 years of collaboration to improve the lives of cancer patients. Bull Am Coll Surg. 2012;97(4):7-15.
  9. Mallin K, Browner A, Palis B, et al. Incident cases captured in the National Cancer Database compared with those in the U.S. population based central cancer registries in 2012–2014. Ann Surg Oncol. 2019;26(6):1604-1612.