A Deep-Learning Model May Help Predict Lung Cancer Survival and Outcomes
PHILADELPHIA – A deep-learning model developed using serial image scans of tumors from patients with non-small cell lung cancer (NSCLC) predicted treatment response and survival outcomes better than standard clinical parameters, according to results published in Clinical Cancer Research, a journal of the American Association for Cancer Research.
“Our research demonstrates that deep-learning models integrating routine imaging scans obtained at multiple time points can improve predictions of survival and cancer-specific outcomes for lung cancer,” said Hugo Aerts, PhD, director of the Computational and Bioinformatics Laboratory at the Dana-Farber Cancer Institute and Brigham and Women’s Hospital, and an associate professor at Harvard University. “By comparison, a standard clinical model relying on stage, gender, age, tumor grade, performance status, smoking status, and tumor size could not reliably predict two-year survival or treatment response.”
Lung cancer is the most common cancer and the leading cause of cancer death worldwide. NSCLC accounts for about 85 percent of all lung cancers. The standard assessment for diagnosis and response to therapy for these patients relies heavily on the measurement of maximum tumor diameter, which is susceptible to variations in interpretation between observers and over time.
To see if they could extract more predictive insights as cancers evolve, Aerts and colleagues built deep-learning models. They used transfer learning from a neural network pretrained on ImageNet, a large image database assembled by researchers at Princeton University and Stanford University that covers a wide range of everyday objects, and trained their models on serial CT scans of 179 patients with stage 3 NSCLC who had been treated with chemoradiation. They included up to four images per patient, obtained routinely before treatment and at one, three, and six months after treatment, for a total of 581 images.
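For readers unfamiliar with transfer learning, the idea is to start from a network whose weights were already learned on ImageNet and adapt it to a new task. The study’s exact architecture is not described here; the sketch below is a minimal, hypothetical illustration in Python (PyTorch/torchvision), with layer choices and the two-class output chosen purely for demonstration.

```python
# Hypothetical sketch of transfer learning from an ImageNet-pretrained CNN;
# this is NOT the architecture used in the study, only an illustration.
import torch
import torch.nn as nn
from torchvision import models

# Start from a CNN whose weights were learned on ImageNet (everyday objects),
# then replace the final layer with a two-class head (e.g., two-year survival).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

# Freeze the earlier layers so only the new task-specific head is trained at first.
for param in list(backbone.parameters())[:-2]:
    param.requires_grad = False

# Each input would be a preprocessed CT-derived image tensor (3 x 224 x 224);
# a real pipeline would combine the pretreatment and follow-up scans per patient.
dummy_scan = torch.randn(1, 3, 224, 224)
logits = backbone(dummy_scan)
print(logits.shape)  # torch.Size([1, 2])
```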
The investigators evaluated the models’ ability to predict clinically meaningful cancer outcomes using two datasets: the training dataset of 581 images and an independent validation dataset of 178 images from 89 patients with NSCLC who had been treated with chemoradiation and surgery.
The models’ performance improved with the addition of each follow-up scan. The area under the curve, a measure of the model’s accuracy, for predicting two-year survival based on pretreatment scans alone was 0.58, which improved significantly to 0.74 after adding all available follow-up scans. Patients classified by the model as low risk for mortality had six-fold better overall survival compared with those classified as high risk.
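The area under the curve summarizes how well a model’s risk scores rank patients who had the outcome above those who did not: 0.5 corresponds to chance, 1.0 to a perfect ranking. As a minimal illustration only, the snippet below computes an AUC with scikit-learn on made-up labels and scores, not the study’s data.

```python
# Minimal AUC illustration with hypothetical labels and model scores
# (chosen for demonstration; these are not the study's data).
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1, 0, 1, 0, 1]                      # hypothetical 2-year survival labels
y_score = [0.2, 0.4, 0.35, 0.8, 0.1, 0.65, 0.5, 0.9]   # hypothetical model risk scores

# 0.5 is chance-level ranking; 1.0 is perfect separation of the two groups.
print(round(roc_auc_score(y_true, y_score), 2))
```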
Compared with the clinical model based on stage, gender, age, tumor grade, performance status, smoking status, and clinical tumor size, the deep-learning model was better at predicting distant metastasis, progression, and locoregional recurrence.
“Radiology scans are captured routinely from lung cancer patients during follow-up examinations and are already in digital form, making them ideal for artificial intelligence applications,” said Aerts. “Deep-learning models that quantitatively track changes in lesions over time may help clinicians tailor treatment plans for individual patients and help stratify patients into different risk groups for clinical trials.”
The main limitation of this proof-of-principle research, Aerts said, is that the findings need to be confirmed with larger datasets and evaluated in prospective clinical trials.
The study was supported by grants from the National Institutes of Health. Aerts reports holding shares in Genospace and Sphera, unrelated to this research.