Using EyeArt and AI to Detect Diabetic Retinopathy

Two colleagues from the Lewis Katz School of Medicine discuss the promise of AI to detect diabetic retinopathy.

Jeffrey Henderer, MD

Artificial intelligence (AI) software has the potential to decrease the time clinicians spend on image interpretation, provide point-of-care results to the patient, and expand the scope of diabetic retinopathy screening. Technology such as Eyenuk's EyeArt can help identify which patients need additional screening and treatment.

Previous research found that EyeArt accurately detected diabetic retinopathy 95.5% of the time. The findings emphasized that the AI software was accurate without the input of an ophthalmologist and could complete the task in less than 1 minute.

However, challenges persist with the software, especially when images are blurry or otherwise ungradable.

HCPLive® spoke with Jeffrey Henderer, MD, and Nikita Mokhashi, BS, both of Temple University’s Lewis Katz School of Medicine, to learn more about how EyeArt is being used, the promise of AI to detect diabetic retinopathy, and the findings of a recent study that was set to be presented at the 2020 Association for Research in Vision and Ophthalmology (ARVO) annual meeting.

Editor’s note: This interview has been lightly edited for style and clarity.

HCPLive: Can you explain your study “A comparison of artificial intelligence and human diabetic retinal image interpretation in an urban health system,” and its clinical importance?

Mokhashi: We retrospectively gathered 3 months of photos from the camera, and I ran them through EyeArt, which returned either a referable or a non-referable result based on whether they met a certain International Clinical Diabetic Retinopathy (ICDR) score for diabetic retinopathy. Then I put that all into a spreadsheet before looking at the clinical assignment.

After I assigned all of those, I went back to the optometrists' readings for those cases in their charts, put in their ICDR scores, and compared how similar or dissimilar they were. The purpose was to see how the optometrists compared to the EyeArt technology.
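
As a rough illustration, the comparison Mokhashi describes might look something like the Python sketch below. The ICDR referral threshold (a grade of moderate nonproliferative retinopathy or worse) and the field names are illustrative assumptions, not details taken from the study.

REFERABLE_THRESHOLD = 2  # assumed ICDR scale: 0=none, 1=mild, 2=moderate, 3=severe, 4=proliferative

def is_referable(icdr_grade: int) -> bool:
    """Map an ICDR grade to a referable / non-referable decision."""
    return icdr_grade >= REFERABLE_THRESHOLD

def compare_reads(cases):
    """Tally agreement between the AI result and the optometrist's grade.

    Each case is a dict such as {"ai_referable": True, "optometrist_icdr": 3}.
    """
    agree = disagree = 0
    for case in cases:
        human_referable = is_referable(case["optometrist_icdr"])
        if case["ai_referable"] == human_referable:
            agree += 1
        else:
            disagree += 1
    return agree, disagree

cases = [
    {"ai_referable": True, "optometrist_icdr": 3},   # both say refer
    {"ai_referable": False, "optometrist_icdr": 0},  # both say do not refer
    {"ai_referable": True, "optometrist_icdr": 1},   # AI over-refers
]
agree, disagree = compare_reads(cases)
print(f"agreement: {agree} of {agree + disagree} cases")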

There were a few things that we encountered. One was that some of our photos were not as high quality as we wanted, so that was a useful quality-improvement finding for us. Another was that certain photos were ungradable, and we were trying to lower our percentage of ungradable photos in order to assess the accuracy of the technology.

HCPLive: So, what was the big finding for practicing optometrists?

Henderer: I'm not sure that we would look at this as an optometry kind of thing. This is an attempt to improve the quality of care in a health system. It just so happens that we have optometrists who are interpreting the photos on a regular basis and have been for the past 4 years. We were, as a result, using them as sort of the de facto gold standard. But that doesn't mean it's specific to optometry; it just happens that that's who we have looking at the photos normally.

I would say that we are trying to figure out if it's possible to screen more people more quickly using automated technology, rather than relying on a human to interpret the photos, because the optometrists are seeing patients as well; interpreting these photos is sort of a side gig for them.

HCPLive: What other specialties might be able to benefit from using this technology?

Henderer: The goal, of course, is to screen diabetics. Diabetics live primarily in internal medicine and endocrinology, with their primary care providers. In theory, they should also be coming to ophthalmology or optometry for annual eye exams, but diabetics frequently do not get annual eye exams. So, we're trying to reach out to the places where diabetics do go, which is their primary care provider, to screen and essentially do an eye exam while they're at the primary care provider's office.

To do that, you need a photograph, and someone has to interpret the photograph. That benefits 2 parties. The first is, of course, the patient. The second is, hopefully, the system, because the primary care providers will be able to claim that they are doing more preventive health maintenance, which means that hopefully their HEDIS (Healthcare Effectiveness Data and Information Set) scores will increase and they'll be able to get bonus payments from insurance providers for taking better care of patients. Then, of course, the ophthalmologists will hopefully be able to see the patients who have problems, and that will allow us to take better care of those patients.

HCPLive: Would you say that’s the ultimate promise of using AI for diabetic retinopathy?

Henderer: Well, the ultimate promise, of course, is to prevent unnecessary blindness. The only way to prevent unnecessary blindness is to first identify the people who need treatment and, second, get them into treatment. The problem has traditionally been the latter. We can certainly set up screenings remotely, but sometimes those screenings don't pan out as well as you would think when it comes to getting people into treatment, because of either insurance barriers or patient travel barriers. So, we believe that by doing this within a health system, where we all are caring for the patients and we all say we take their insurance because it's the same group practice, hopefully we will be able to close that loop and take better care of the patients.

HCPLive: What challenges have you encountered using this software? Nikita mentioned that some of the photos were ungradable.

Mokhashi: When you input photos into EyeArt, at least when I did it last summer (I think some of the technology has changed a little since then), you need to put in 2 photos for each eye per person. And if any 1 of those photos is ungradable, the entire patient gets counted as ungradable. That was 1 of the challenges, and we didn't want it to influence our data, because it doesn't necessarily mean that EyeArt can't grade the photos that are gradable; it's just how the software is set up. So, we excluded those patients to separate out that confounding variable. That was potentially 1 of the challenges we experienced with the technology, but I think that may have changed.
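
A minimal sketch of that patient-level rule, assuming 2 photos per eye (4 per patient) as Mokhashi describes:

def patient_gradable(photo_gradable_flags) -> bool:
    """A patient counts as gradable only if every submitted photo is gradable."""
    return all(photo_gradable_flags)

# One blurry photo out of four makes the whole patient ungradable, even
# though the other three photos could have been graded on their own.
photos = [True, True, False, True]  # right eye x2, left eye x2
print(patient_gradable(photos))     # False -> excluded from the analysis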

HCPLive: Is an ungradable photo just a blurry image?

Mokhashi: Yes, the photo quality is not as good as it needs to be. I think a certain number of quadrants need to be visible. That is good feedback for anybody who's reading the photo separately, because sometimes I would go and look at one and it just really was not a good photo. It's a good way for people to get instant feedback that their photo is not as high quality as it needs to be.

HCPLive: Is this something that could ultimately replace a human expert?

Henderer: Yes, the point of artificial intelligence reads is that it will replace the need for a human to do the interpretation.

HCPLive: Do you think that experts would be less inclined to use it because it's taking away something that they do or does this just make it easier for their day to day?

Henderer: That's an excellent question, and everybody wonders if the computer will replace their job. I think the answer to that is partially yes and partially no. If you are looking to continue to have a human interpret the photographs, that will go away, but the reality is that somebody still has to see the patient face to face if the patient screens positive, whether the photos were interpreted by a computer or a human. It turns out that, at least in ophthalmology, we do not want to see normal patients; we have no interest in seeing normal patients. We only want to see patients who actually need our help. So, if we can screen out all the normal patients and only see those patients who actually need our help, that's actually a much more efficient way to deliver care and a better use of our time.

It also turns out that the reimbursement for reading these photographs is so low that it's very difficult to assign anybody to the job; it just doesn't pay enough to make it worth their time. So, it's a difficult situation to reconcile, because the reimbursements are low and the time required is somewhat intensive. But we do hope that we can screen out the “normals” and focus on those who need our help.

HCPLive: Is there anything else that we should know about EyeArt and how it's being used right now to identify diabetic retinopathy?

Henderer: There are sort of 2 aspects. The first is that a couple of clinical trials have been released. One looked at a bank of about 1700 patients and showed very high, upper-90s sensitivity and specificity for identifying disease and excluding those without disease. If you're looking to set up a screening program, the key part is identifying disease; otherwise, there's no point in screening at all. So, when balancing sensitivity and specificity (sensitivity being identifying those who have the disease, and specificity being making sure those who don't have the disease aren't flagged as having it), my opinion is that you should err on the side of identifying disease and over-referring patients, because after all, it is a screening program, and the point of a screening program is to identify disease.

So, setting the bar so that you don't over-refer does, to some degree, hamper your ability to identify disease. You can't have it both ways: either sensitivity is high at the expense of specificity, or specificity is high at the expense of sensitivity. My bias is for sensitivity to be high and to live with the over-referrals if need be. EyeArt has actually demonstrated extremely high sensitivity and sometimes not great specificity, at least in our study. But as a general theme, it has had very high sensitivity, and that to me is exactly what you want to see in a screening program.
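
The trade-off Henderer describes falls out of the standard definitions of sensitivity and specificity, sketched briefly below. The confusion-matrix counts are made up for illustration; they are not figures from the trials or from the Temple study.

def sensitivity(tp: int, fn: int) -> float:
    """Fraction of patients with disease who screen positive: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of patients without disease who screen negative: TN / (TN + FP)."""
    return tn / (tn + fp)

# A threshold tuned to over-refer catches nearly every true case (high
# sensitivity) at the cost of more false positives (lower specificity).
tp, fn, tn, fp = 96, 4, 80, 20
print(f"sensitivity: {sensitivity(tp, fn):.2f}")  # 0.96, few missed cases
print(f"specificity: {specificity(tn, fp):.2f}")  # 0.80, some over-referrals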

Now, EyeArt has not yet received FDA approval, so EyeArt by itself cannot be used on its own. The current iteration is called EyeScreen, which combines the artificial intelligence read with a human read occurring somewhat simultaneously, in order to provide feedback to the patient and the primary care doctor.

When we implement this in our health system, which we would be in the process of doing if it weren't for this pandemic, the idea is that we'll provide the artificial intelligence read and the human read simultaneously to the primary care provider. They won't get the exact point-of-care feedback they would if it were just EyeArt by itself, because we're still going to have to get the optometrist's read before the result is released to the patient and the PCP. But that's the way it will be until EyeArt gets FDA approval; then it could run independently at the point of care, the way it's intended to.

