Adenoma detection rate (ADR), a key measure of colonoscopy quality, can be calculated automatically by new software, according to investigators.
The new software, which automatically integrates endoscopy and pathology reports across a variety of practice settings, delivered an ADR on par with manual review, supporting its accuracy and feasibility for real-world use, reported Todd A. Brenner, MD, of Johns Hopkins Hospital, Baltimore, and colleagues.
“ADR calculation is resource-intensive, often requiring manual collation of endoscopy and pathology data across multiple reporting modalities, making it an impractical tool for frequent quality audits at many centers,” the investigators wrote in Techniques and Innovations in Gastrointestinal Endoscopy.
Although others have tried to streamline ADR calculation, most efforts have relied upon manual entry of pathology data, while approaches using artificial intelligence tend to be costly and clumsy to implement across different databases, according to the investigators.
“Thus, there is a substantial demand for a novel tool to extract and analyze colonoscopy indicators from text-based reports that provides accurate data extraction in a package that is easily implemented and modified by clinicians,” they wrote.
Dr. Brenner and colleagues developed a web-based platform to meet these goals.
Following colonoscopy, the system gathers procedural and histopathology results, extracts and classifies the relevant data, and then outputs the ADR, along with cecal intubation rate, Boston Bowel Preparation Scale (BBPS) score, and withdrawal time.
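The publication's implementation is not reproduced here, but the kind of rule-based, text-driven extraction the investigators describe can be sketched roughly as follows; all names, fields, and search terms in this example are hypothetical illustrations, not the authors' actual code:

```python
# Minimal sketch of rule-based ADR extraction. All names and the search
# terms below are hypothetical illustrations, not the authors' code.
import re
from dataclasses import dataclass

# Editable adenoma search terms; the authors note such terms can be
# tuned to match local pathology reporting conventions.
ADENOMA_TERMS = [
    r"tubular adenoma",
    r"tubulovillous adenoma",
    r"villous adenoma",
    r"sessile serrated",
]

@dataclass
class Procedure:
    indication: str       # e.g., "screening"
    pathology_text: str   # free-text pathology report

def has_adenoma(report: str) -> bool:
    """True if any adenoma search term appears in the pathology report."""
    text = report.lower()
    return any(re.search(term, text) for term in ADENOMA_TERMS)

def adr(procedures: list[Procedure]) -> float:
    """Share of screening colonoscopies with at least one adenoma found."""
    screening = [p for p in procedures if p.indication == "screening"]
    if not screening:
        return 0.0
    positives = sum(has_adenoma(p.pathology_text) for p in screening)
    return positives / len(screening)
```

In this sketch, the denominator is restricted to screening colonoscopies and the numerator to those whose pathology report mentions at least one adenoma, mirroring the standard definition of the metric.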
The software was evaluated using endoscopy and pathology reports from 3,809 colonoscopies performed at six centers over 3 months. Six months later, the investigators manually reviewed data from a validation cohort of 1,384 colonoscopies conducted over a 1-month period.
The automated and manual approaches showed high concordance, with an ADR of 45.1% for the automated system versus 44.3% for manual review. The software also correctly identified most ADR-qualifying screening colonoscopies (sensitivity, 0.918; specificity, 1.0).
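For context on these figures, sensitivity and specificity here treat manual review as the gold standard; a generic calculation (not the authors' analysis code) would look like this:

```python
def sensitivity_specificity(auto: list[bool], manual: list[bool]):
    """Sensitivity and specificity of automated ADR-qualifying labels,
    with manual review treated as the gold standard. Each list holds
    one boolean per procedure (True = ADR-qualifying adenoma found)."""
    tp = sum(a and m for a, m in zip(auto, manual))          # true positives
    fn = sum(m and not a for a, m in zip(auto, manual))      # false negatives
    tn = sum(not a and not m for a, m in zip(auto, manual))  # true negatives
    fp = sum(a and not m for a, m in zip(auto, manual))      # false positives
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    spec = tn / (tn + fp) if (tn + fp) else float("nan")
    return sens, spec
```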
“The discrepancy between manual and automated ADR calculations was exclusively attributable to missed (i.e., false negative) identification of ADR-qualifying procedures,” the investigators wrote.
Of the 43 missed cases, about half involved pending pathology results or erroneous pathology sample entries, while the remainder stemmed from spelling or syntax issues that stumped the system.
Still, Dr. Brenner and colleagues suggested that additional programming can overcome these kinds of issues and allow for generalizability across institutions. They noted that search terms can be edited to match local practice patterns, while the web-based reporting platform can be customized to deliver desired quality metrics.
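Continuing the hypothetical sketch above, that kind of site-level customization could be as small as editing the search-term list, for instance to catch a local abbreviation:

```python
# Hypothetical example: a site that abbreviates "tubular adenoma" as
# "TA" in diagnosis lines could add a word-boundary pattern for it.
ADENOMA_TERMS.append(r"\bta\b")
```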
The publication includes a screenshot of one such dashboard, including a readout of ADR, a comparison of ADR across sexes, a pie chart of BBPS score distribution, and gauge charts for cecal intubation rate and mean withdrawal time.
“Further development of this Internet-based colonoscopy quality reporting platform will focus on integrating additional metrics, such as adenomas per colonoscopy, as well as novel metrics, such as a size-stratified ADR, location-stratified ADR, or ADR stratified by polyp histology,” the investigators wrote.
They predicted that automating data collection in this way could help determine which metrics provide clinically meaningful insights, potentially expanding the roster of standard performance benchmarks.
“We further intend to study the integration of this platform into colonoscopy quality improvement and transparency programs to better characterize the impact of frequent, on-demand ADR feedback on colonoscopy performance,” Dr. Brenner and colleagues concluded.

The investigators disclosed relationships with Olympus, Medtronic, Apollo Endosurgery, and others.
