Legal Gray Areas of AI in Sleep Medicine, With Ramesh Sachdeva, MD, PhD

At SLEEP 2025, Sachdeva discussed the evolving legal implications of AI use in sleep medicine, including clinician liability, informed consent, and responsible integration into care.

Editor's Note: Sachdeva's opinions are his own and do not reflect those of the AASM. This article should not be viewed as legal advice.

At SLEEP 2025, the 39th annual meeting of the Associated Professional Sleep Societies in Seattle, Ramesh Sachdeva, MD, PhD, from Children’s Hospital of Michigan, presented on the legal issues of artificial intelligence (AI) in sleep medicine.1 HCPLive spoke with Sachdeva at the meeting about legal concerns surrounding AI, who holds responsibility for an error made by an AI-powered diagnostic system, and how sleep specialists should navigate informed consent when AI tools are used in clinical decision-making or patient monitoring.

“The big message for clinicians is that they need to be aware of what's going on in the legal landscape as AI is relatively new,” Sachdeva said. “This is evolving as we speak…the scope of legal issues is very broad, all the way from civil issues to issues of intellectual property to issues of things like vicarious liability.”

AI has woven itself into the neurology field, entering the specialties of epilepsy, migraine, movement disorders, neuromuscular disorders, and sleep disorders.2 Several auto-scoring AI tools have been developed, as well as various smartwatches and rings that provide insightful data about sleep stages, sleep quality, and nocturnal oxygenation, along with EKG-based alerts about possible sleep apnea. There are even AI tools that help patients understand their risk of sleep disorders or can assist in the diagnosis of narcolepsy by analyzing polysomnography patterns.

However, if clinicians use AI tools to help form clinical decisions, this may result in medical negligence, Sachdeva said.

“The takeaway point is that clinicians should be aware of where things are going,” he added. “The answers right now from a legal precedent are still developing, so there's no clear-cut regulations for the most part, but we need to be aware of this so that we can adopt AI and use it in a responsible manner to benefit our patients.”

When it comes to who holds responsibility for an AI’s error, Sachdeva said there is no easy answer, and it is case-specific. However, liability could potentially fall on the developer, the clinician, the institution, or some combination of the three, depending on the situation. The responsibility depends on how the AI algorithm is being used, such as whether it operates independently or augments clinical decision-making.

Due to the risks involved, Sachdeva emphasized the importance of using AI thoughtfully and reviewing the recommendations it makes.

“As physicians, we are reviewing the recommendations coming from these AI applications so that we can use our clinical background and experience to leverage the AI output to provide the best care for our patients,” Sachdeva said.

Relevant disclosures for Sachdeva include Amarin Pharma Inc.

References

  1. Sachdeva R, Goldstein C, Horsnell M, et al. Legal Issues and the Practice of Sleep Medicine: Artificial Intelligence, Machine Learning, & Emerging Technologies. Presented at: SLEEP 2025, the 39th annual meeting of the Associated Professional Sleep Societies; June 10, 2025; Seattle, WA.
  2. Meglio M. System Integration: How AI Is Weaving Itself into Neurology. HCPLive. December 5, 2024. https://www.hcplive.com/view/system-integration-how-ai-is-weaving-itself-into-neurology. Accessed June 23, 2025.


