A clinician performing an echocardiogram of a patient's heart
Cardiologists use echocardiography to diagnose a range of functional or structural abnormalities of the heart. Drawing on what is often more than 100 videos and images capturing different parts of the heart, echocardiographers make dozens of measurements, such as the heart's size and shape, ventricular wall thickness, and the movement and function of each heart chamber, to assess a patient's heart health.
A new study in JAMA led by Yale School of Medicine (YSM) researchers finds that an artificial intelligence (AI)-enabled tool can interpret echocardiograms with a high degree of accuracy in just a few minutes.
“Echocardiography is a cornerstone of cardiovascular care, but it requires a tremendous amount of clinical time from highly skilled readers to review these studies,” says Rohan Khera, MD, MS, assistant professor of medicine (cardiovascular medicine) at YSM and of biostatistics (health informatics) at Yale School of Public Health. Khera is the paper's senior author and director of the Cardiovascular Data Science Lab (CarDS). “We wanted to develop a technology that can assist these very busy echocardiographers to help improve accuracy and accelerate their workflow.”
The researchers found that the AI tool, PanEcho, could perform 39 diagnostic tasks based on multi-view echocardiography, accurately detecting conditions such as severe aortic stenosis and systolic dysfunction and estimating measurements such as left ventricular ejection fraction, among others. This study builds on previous publications, including a 2023 publication in the European Heart Journal, that demonstrated the technology’s accuracy.
Greg Holste, MSE, a PhD student at the University of Texas at Austin who is co-advised by Khera and is co-first author of the study, says, “We developed a tool that integrates information from many views of the heart to automatically identify the key measurements and abnormalities that a cardiologist would include in a complete report.”
PanEcho was developed using 999,727 echocardiographic videos collected from Yale New Haven Health patients between January 2016 and June 2022. Researchers then validated the tool using studies from 5,130 Yale New Haven Health patients as well as three external data cohorts from the Heart and Vascular Center of Semmelweis University in Budapest, Hungary; Stanford University Hospital; and Stanford Health Care.
“The tool can now measure and assess a wide range of heart conditions, making it much more attractive for future clinical use,” says Evangelos K. Oikonomou, MD, DPhil, clinical fellow (cardiovascular medicine) and co-first author of the study. “While it is highly accurate, it can be less interpretable than the read from a clinician. It’s still an algorithm and it requires human oversight.”
While PanEcho is not yet available for clinical use, the paper discusses several potential future clinical applications of the technology. For instance, echocardiographers could utilize the tool as a preliminary reader to help assess images and videos in the echocardiography lab. It could also serve as a second set of eyes to help identify potentially missed abnormalities in existing databases.
The researchers also note that this technology could be particularly valuable in low-resource settings, where access to equipment and skilled echocardiographers is limited. In these environments, clinicians often rely on handheld, point-of-care ultrasound devices, which produce lower-quality imaging that can be more challenging to interpret.
To validate the model’s accuracy with point-of-care ultrasounds, the researchers used imaging from the Yale New Haven Hospital emergency department, which performs point-of-care ultrasounds as part of routine care.
“We replicated the experience of low-resource settings across the world, where clinicians typically use a handheld ultrasound and wait for those images to be interpreted by a cardiologist elsewhere,” says Khera. “Even with lower-quality images, our model was very resilient and extracted the information needed to make a highly accurate determination.”
Khera and his colleagues are now working to conduct studies to assess how using the tool might change patient care in the echocardiography laboratory at Yale.
“We are learning much more about how clinicians use the tool in a real-world setting, including modifications to their workflow, their responses to the information, and the value, if any, that this tool adds in a clinical context,” says Khera.
“AI tools like the one validated in this study have the potential to help us increase our efficiency and accuracy, ultimately allowing us to screen and treat a larger number of patients with cardiovascular conditions,” says Eric J. Velazquez, MD, Robert W. Berliner Professor of Medicine (cardiovascular medicine) and chief of Yale Cardiovascular Medicine. “I’m proud of Yale’s continued commitment to investing in cutting-edge research to help us innovate new ways to deliver care.”
The full model and weights are available via open source, and the research team is encouraging other investigators to test the model using their echocardiographic studies and make improvements.
Additional study authors include Zhangyang Wang, PhD, at the University of Texas at Austin, and Márton Tokodi MD, PhD, and Attila Kovács, MD, PhD, both of Semmelweis University.
The research reported in this news article was supported by the National Institutes of Health (awards R01HL167858, K23HL153775, R01AG089981, and F32HL170592), Yale University, and other funding sources. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.