A deep learning model can identify both iRORA and cRORA lesions with sensitivity similar to that of human graders, according to a new analysis.
A new investigation aimed to evaluate a deep learning algorithm to automatically detect incomplete retinal pigment epithelial and outer retinal atrophy (iRORA) and complete retinal pigment epithelial and outer retinal atrophy (cRORA) in eyes with age-related macular degeneration (AMD).
The results suggest a deep learning model can accurately and precisely identify both iRORA and cRORA lesions within optical coherence tomography (OCT) B-scan volumes from eyes with nonneovascular AMD.1
“The model can achieve similar sensitivity compared with human graders, which potentially obviates a laborious and time-consuming annotation process and could be developed into a diagnostic screening tool,” wrote the investigative team, led by Srinivas R. Sadda, MD, Doheny Eye Institute, University of California Los Angeles.
Within the retrospective machine learning analysis, a deep learning model was trained to jointly classify the presence of iRORA and cRORA within a given B-scan. The algorithm was then evaluated using 2 separate, independent datasets. The OCT B-scan volumes were captured from a total of 71 patients with nonneovascular AMD at the Doheny-UCLA Eye Centers.
In addition, the following 2 external OCT B-scan testing datasets were used: (1) University of Pennsylvania, University of Miami, and Case Western Reserve University and (2) Doheny Image Reading Research Laboratory.
Images were then annotated by an experienced grader for the presence of iRORA and cRORA. Investigators trained a ResNet18 model to classify the annotations for each B-scan using OCT volumes collected at the Doheny-UCLA Eye Centers. This model was applied to the 2 testing datasets in order to assess out-of-sample model performance, according to investigators.
The main outcomes for the analysis were measures of model performance, quantified as the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC). The team additionally compared the model's sensitivity, specificity, and positive predictive value against those of additional clinician annotators.
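Both outcome metrics can be computed directly from per-B-scan labels and model probabilities. The snippet below is a minimal sketch using scikit-learn on randomly generated stand-in data (not study data), for one lesion type at a time:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

# Illustrative stand-ins: ground-truth lesion labels (1 = present) and
# noisy, overlapping model scores for one lesion type (e.g., cRORA).
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)
y_score = 0.35 * y_true + rng.random(200) * 0.65

auroc = roc_auc_score(y_true, y_score)            # area under the ROC curve
auprc = average_precision_score(y_true, y_score)  # area under the precision-recall curve
print(f"AUROC={auroc:.2f}, AUPRC={auprc:.2f}")
```

AUROC summarizes the sensitivity-specificity trade-off across thresholds, while AUPRC is more informative when lesions are rare, as in the general-population test set, because it ignores the large pool of true negatives.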
Upon analysis, on an independently collected test set consisting of 1117 volumes from the general population, investigators found the model predicted iRORA and cRORA presence within the entire volume with nearly perfect AUROC performance; AUPRC scores were 0.61 (95% confidence interval [CI], 0.45 - 0.82) for iRORA and 0.83 (95% CI, 0.68 - 0.95) for cRORA.
On a further independently collected set, consisting of 60 OCT B-scans enriched for iRORA and cRORA lesions, the model achieved AUROC values of 0.68 (95% CI, 0.54 - 0.81) for iRORA and 0.84 (95% CI, 0.75 - 0.94) for cRORA, with AUPRC values of 0.70 (95% CI, 0.55 - 0.86) and 0.82 (95% CI, 0.70 - 0.93), respectively.
Together, investigators suggest the results show the deep learning model is both accurate and precise in identifying iRORA and cRORA lesions within OCT B-scan volumes.