A Versatile AI System for Analyzing Series of Medical Images


A new AI-based system for analyzing images taken over time can accurately detect changes and predict outcomes, according to a study led by investigators at Weill Cornell Medicine, Cornell’s Ithaca campus and Cornell Tech. The system’s sensitivity and flexibility could make it useful across a wide range of medical and scientific applications.

The new system, termed LILAC (Learning-based Inference of Longitudinal imAge Changes), is based on an AI approach called machine learning. In the study, which appears Feb. 20 in the Proceedings of the National Academy of Sciences, the researchers developed the system and demonstrated it on diverse time series of images (also called "longitudinal" image series) covering developing IVF embryos, healing wound tissue and aging brains. The researchers showed that LILAC can identify even very subtle differences between images taken at different times across these varied domains, and can predict related outcome measures such as cognitive scores from brain scans.

Dr. Mert Sabuncu

“This new tool will allow us to detect and quantify clinically relevant changes over time in ways that weren't possible before, and its flexibility means that it can be applied off-the-shelf to virtually any longitudinal imaging dataset,” said study senior author Dr. Mert Sabuncu, vice chair of research and a professor of electrical engineering in radiology at Weill Cornell Medicine and professor in the School of Electrical and Computer Engineering at Cornell University’s Ithaca campus and Cornell Tech.

The study’s first author is Dr. Heejong Kim, an instructor of artificial intelligence in radiology at Weill Cornell Medicine and a member of the Sabuncu Laboratory.

Traditional methods for analyzing longitudinal image datasets tend to require extensive customization and pre-processing. For example, researchers studying the brain may take raw brain MRI data and pre-process the image data to focus on just one brain area, also correcting for different view angles, sizing differences and other artifacts in the data—all before performing the main analysis.

The researchers designed LILAC to work much more flexibly, in effect automatically performing such corrections and finding relevant changes.

Dr. Heejong Kim

“This enables LILAC to be useful not just across different imaging contexts but also in situations where you aren’t sure what kind of change to expect,” said Dr. Kim, LILAC’s principal designer.

In one proof-of-concept demonstration, the researchers trained LILAC on hundreds of sequences of microscope images showing in-vitro-fertilized embryos as they develop, and then tested it on new embryo image sequences. LILAC had to determine, for randomized pairs of images from a given sequence, which image was taken earlier: a task that cannot be done reliably unless the image data contain a true "signal" of time-related change. LILAC performed this task with about 99% accuracy; the few errors occurred in image pairs separated by relatively short time intervals.
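The pairwise ordering task can be illustrated with a toy sketch in Python. Everything here is hypothetical: the "encoder" is a hand-picked image statistic (mean intensity) rather than LILAC's learned neural encoder, and the "embryo" images are synthetic arrays whose intensity drifts upward over time.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(img):
    # Stand-in for a learned image encoder: here, just mean intensity.
    # In the real system, this mapping is learned from training sequences.
    return img.mean()

def predict_earlier(img_a, img_b):
    # Compare encoded features: the image with the smaller feature value
    # is predicted to be the earlier one in the sequence.
    return "a" if encoder(img_a) < encoder(img_b) else "b"

# Synthetic "longitudinal" sequence: intensity grows with time plus noise,
# a crude proxy for a developing embryo filling more of the frame.
sequence = [rng.normal(loc=t, scale=0.1, size=(8, 8)) for t in range(10)]

# Evaluate ordering on randomized pairs drawn from the sequence.
trials = 200
correct = 0
for _ in range(trials):
    i, j = rng.choice(len(sequence), size=2, replace=False)
    truth = "a" if i < j else "b"
    correct += predict_earlier(sequence[i], sequence[j]) == truth
accuracy = correct / trials
print(f"ordering accuracy: {accuracy:.2f}")
```

Because the synthetic signal is strong relative to the noise, this toy classifier orders nearly every pair correctly; the hard pairs, as in the study, would be those close together in time, where the feature difference is small relative to noise.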

LILAC also was highly accurate in ordering pairs of images of healing tissue from the same sequences, and in detecting group-level differences in healing rates between untreated tissue and tissue that received an experimental treatment.

Similarly, LILAC accurately predicted the time intervals between MRI images of healthy older adults' brains, as well as individual cognitive scores from MRIs of patients with mild cognitive impairment, in both cases with much less error than baseline methods.
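At heart, the interval-prediction task is a regression problem: map the difference between two scans to the elapsed time between them. A minimal sketch, using a synthetic scalar "feature" as a stand-in for a learned image representation (none of this is the actual LILAC model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for longitudinal brain MRIs: a scalar feature that
# drifts linearly with age, plus measurement noise at each scan.
n_pairs = 500
t1 = rng.uniform(0, 10, n_pairs)                  # age at first scan
dt = rng.uniform(0.5, 5, n_pairs)                 # true inter-scan interval
f1 = t1 + rng.normal(0, 0.2, n_pairs)             # feature at first scan
f2 = (t1 + dt) + rng.normal(0, 0.2, n_pairs)      # feature at second scan

# Regress the interval on the feature difference (least squares + intercept).
X = np.stack([f2 - f1, np.ones(n_pairs)], axis=1)
coef, *_ = np.linalg.lstsq(X, dt, rcond=None)
pred = X @ coef
mae = np.abs(pred - dt).mean()
print(f"mean absolute error: {mae:.2f} (intervals range from 0.5 to 5)")
```

The point of the sketch is the framing, not the model: once each image is reduced to a feature representation, "how much time passed between these two scans?" becomes an ordinary supervised regression on the feature difference.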

The researchers showed in all these cases that LILAC can be adapted easily to highlight the image features that are most relevant for detecting changes in individuals or differences between groups—which could provide new clinical and even scientific insights.

“We expect this tool to be useful especially in cases where we lack knowledge about the process being studied, and where there is a lot of variability across individuals,” Dr. Sabuncu said.

The researchers now plan to demonstrate LILAC in a real-world setting to predict treatment responses from MRI scans of prostate cancer patients.

The LILAC source code is freely available at https://github.com/heejong-kim/LILAC

Many Weill Cornell Medicine physicians and scientists maintain relationships and collaborate with external organizations to foster scientific innovation and provide expert guidance. The institution makes these disclosures public to ensure transparency. For this information, see the profile for Dr. Mert Sabuncu.

Funding for this project was provided in part by grants from the National Cancer Institute and the National Institute on Aging, both part of the National Institutes of Health, through grant numbers K25CA283145, R01AG053949, R01AG064027 and R01AG070988. For aging brain experiments, data were provided by OASIS-3: Longitudinal Multimodal Neuroimaging: Principal Investigators: T. Benzinger, D. Marcus, and J. Morris; NIH P30 AG066444, P50 AG00561, P30 NS09857781, P01 AG026276, P01 AG003991, R01 AG043434, UL1 TR000448, and R01 EB009352.

Weill Cornell Medicine
Office of External Affairs
Phone: (646) 962-9476