Radiology's 'Minority Report'
Today's 'normal' radiology scans may contain information predicting the future development of diseases, such as cancer. Like the clairvoyant 'precogs' in the film 'Minority Report,' advanced image feature extraction algorithms can now detect changes that are 'invisible' to a radiologist yet indicative of future health issues for a patient. While the ability to predict disease manifestation offers unprecedented opportunities for personalizing risk assessment and enabling earlier detection, it also raises clinical, administrative, insurance-eligibility, population-health, liability, and ethical concerns. And as AI-based predictive tools continue to advance, how should we manage the possibility of retrospectively analyzing the earlier images of any patient who later becomes ill?
The world of diagnostic imaging is rapidly moving towards broader adoption of machine intelligence meant to enhance the capabilities and efficiency of radiology services. Novel algorithms and ever-increasing computing power are easing clinical adoption, even for high-throughput healthcare departments. Radiologists are at the forefront of this practical implementation: 87% of all FDA-cleared AI healthcare applications in 2022 were related to medical imaging.
We are witnessing a transformation in which radiological information, such as images and reports, is being recognized as a wellspring for mining objective and actionable data. A growing array of machine learning and artificial intelligence tools aims to detect findings that are invisible to a physician's eye. This concept isn't entirely new: for decades, various quantitative, parametric imaging exams have required computer assistance to extract meaningful information from image datasets. Compared with today's tools, however, those earlier systems (such as prostate or breast CAD) were more transparent and more comprehensible to humans, designed to streamline or expedite processes that radiologists could typically perform on their own.
Now we can apply the concept of radiomics, in which advanced algorithms and computational techniques extract a wide range of quantitative features from medical images. The number of discovered, meaningful imaging biomarkers is already vast and continuously expanding. Machine learning techniques, and even foundation AI models, enable computers to learn automatically from data and improve at clinically driven, image-based detection tasks. Based on patterns extracted from imaging data and other available patient information, computers become capable not only of forecasting treatment response or patient outcomes, but also of predicting disease from scans that may appear 'normal' to a radiologist.
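To make the radiomics idea concrete, here is a minimal, purely illustrative sketch of the pipeline: quantitative features are computed from image patches, and a classifier learns to predict a label from those features. Everything below is an assumption for illustration only; the synthetic 'scans' and the three hand-picked first-order features are stand-ins for the hundreds of standardized shape, intensity, and texture features used in real radiomics work.

```python
# Illustrative radiomics-style sketch: extract simple quantitative features
# from synthetic image patches, then fit a classifier on them.
# NOT a clinical pipeline; data and features are toy stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

def extract_features(patch: np.ndarray) -> np.ndarray:
    """Compute a tiny first-order feature vector from a 2D image patch."""
    return np.array([
        patch.mean(),                           # average intensity
        patch.std(),                            # intensity heterogeneity
        np.abs(np.diff(patch, axis=0)).mean(),  # crude texture/gradient proxy
    ])

rng = np.random.default_rng(0)
# Synthetic "scans": class-1 patches are deliberately more heterogeneous.
patches = [rng.normal(100, 5 + 10 * label, size=(32, 32))
           for label in (0, 1) for _ in range(50)]
y = np.array([0] * 50 + [1] * 50)

X = np.vstack([extract_features(p) for p in patches])  # shape (100, 3)

clf = LogisticRegression().fit(X, y)
print("training accuracy:", clf.score(X, y))
```

In a real deployment, the feature extractor would follow a standardized definition set (so that features are reproducible across scanners), and the model would be validated on held-out patients rather than scored on its own training data.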
Moving further along this path presents even greater challenges. As we enter an era in which many of the changes detected by computers are no longer visible to the unequipped human eye, the interpretation of data and its clinical usability increasingly rely on trust in evidence provided up front, and that evidence cannot always be easily validated. Computer tools now reach into raw data, such as k-space in MRI scanners, no longer requiring image reconstruction or the filters that cater to human visual preferences. The complexity of these algorithms creates a 'black box' effect for users.
Yet the radiologist remains responsible for interpreting image data. The first challenge therefore arises in tandem with the development of predictive analytics tools: radiologists need simplified visualizations of image feature maps to help them comprehend changes they cannot see directly. As we increasingly expect machines to detect abnormalities, it is crucial to reconsider what constitutes a 'finding' for a radiologist.
Achieving absolute transparency in computer algorithms might not be feasible. However, any deployment of machine learning and artificial intelligence tools must include substantial and precise evidence that the findings they target are meaningful and actionable.
The emergence of AI-based predictive algorithms prompts a potential redefinition of diagnostic errors in radiology. Without such a redefinition, today's 'normal' scans might have their accuracy and quality of interpretation questioned by tomorrow's techniques, potentially raising liability concerns.
Returning to the analogy with the Precrime system first portrayed in Philip K. Dick's 1956 short story 'The Minority Report': the ability to predict diseases with imaging creates opportunities for personalizing risk assessment and determining the frequency and type of follow-ups, ultimately enabling earlier and more precise detection. However, it also brings forth a spectrum of clinical, administrative, insurance-eligibility, population-health, liability, and ethical concerns that require careful consideration.