
Mitigating AI Legal Risks in Sleep Medicine, With Ramesh Sachdeva, MD, PhD

At SLEEP 2025, Sachdeva outlined evolving liability concerns, transparency issues, and steps clinicians can take to safely integrate AI in practice.

Editor's Note: Sachdeva's opinions are his own and do not reflect the opinions of the AASM. This should not be viewed as legal advice.

Ramesh Sachdeva, MD, PhD, chief of the division of pediatric critical care medicine at Children’s Hospital of Michigan, presented on legal issues of artificial intelligence (AI) in sleep medicine at SLEEP 2025. These issues range from who holds responsibility for AI errors to the risk of medical negligence.

HCPLive spoke with Sachdeva at the meeting about AI legal concerns. He addressed legal challenges clinicians face in interpreting patient-generated data, current guardrails for AI models, and practical steps sleep medicine practices can take to mitigate legal risk while responsibly adopting emerging technologies like AI.

Sachdeva said clinicians should be cognizant of what the AI data are saying and use them to augment clinical decision-making.

“Interestingly, on our panel this morning, one of our panelists represented a patient…and the point he made was specifically that the expectation is that clinicians are [aware of] these device outputs,” Sachdeva said. “From a legal liability standpoint, the risks that you have to be aware of relate to issues of privacy…consent, who owns these data, and also to keep in mind this issue, the growing concern of cybersecurity, because these are large volumes of data potentially being stored in areas where the clinician may not have a direct control.”

Sachdeva highlighted the importance of transparency when it comes to using AI in sleep medicine. Guardrails for AI are still evolving, with more regulatory movement in Europe; the European Union has developed formal legislation, such as the AI Act, to regulate the technology. Sachdeva said national bodies, professional groups, and clinicians all have a role in ensuring the transparency of AI algorithms.

With the rise of AI, clinicians should first learn what policies are in place and comply with them. They should then familiarize themselves with the potential legal risks.

“The third and final point I would make is that the adoption of AI can dramatically improve our efficiencies…effectiveness and improve quality of care to patients who we take care of in the sleep medicine area and beyond,” Sachdeva said. “Therefore, it's important to not be afraid of AI, but approach it in a thoughtful manner and in a responsible manner, so that we can get the best of AI at the same time, have the requisite precautions and safeguards for our practices, institutions, and patients as we adopt AI into the future.”

Relevant disclosures for Sachdeva include Amarin Pharma Inc.

References

Sachdeva R, Goldstein C, Horsnell M, et al. Legal issues and the practice of sleep medicine: artificial intelligence, machine learning, & emerging technologies. Presented at: SLEEP 2025, the 39th annual meeting of the Associated Professional Sleep Societies; June 10, 2025; Seattle, WA.
