A new Yale study investigates how artificial intelligence might help identify patients at risk of developing agitation in the emergency department.
Emergency departments (EDs) in the U.S. have seen a surge in patient visits for mental health conditions over the past decade. These visits frequently include episodes of patient agitation: aggressive, sometimes violent behavior.
Once agitation occurs, clinicians work to quickly diagnose potential causes and intervene before it escalates. In the ED, that may mean using physical restraints or intramuscular medications to care for an agitated patient, measures known to increase the risk of injury, including blunt chest trauma, respiratory depression, and even sudden death.
It can be difficult to predict if and when agitation will happen. But in a new study, Yale researchers describe a potential solution for doing just that. Using artificial intelligence (AI), they created a prediction model for identifying patients at risk of becoming agitated, an advance they say could make the ED safer for patients and providers alike.
“In general, patients that experience psychiatric emergencies or behavioral emergencies first land in our doors,” said Ambrose Wong, MD, MSEd, MHS, associate professor of emergency medicine at Yale School of Medicine (YSM) and lead author of the study published in the journal JAMA Network Open.
Prediction models are becoming more common in modern medicine. They are already used to support clinicians’ decision-making in cardiology, diabetes care, chronic kidney disease, and other areas. But the development of prediction models for mental health has been limited.
For this study, the researchers sought to address this gap by developing, training, and validating a new modeling tool that uses information routinely collected in the ED to identify which patients might become agitated. The tool was built on electronic health record data from more than 3 million ED visits by patients 18 or older at nine hospitals in the northeastern U.S. between 2015 and 2022. The researchers also drew from past research, input from hospital staff, and studies on restraint and sedation use.
The research team looked for patterns in this wide range of information (nearly 700 possible factors in total). But before building their model, they carefully cleaned and organized the data — removing poor-quality entries and merging similar data points — to ensure that the information was consistent and ready for analysis. Ultimately, they narrowed their focus to the most relevant factors, testing combinations of the top 20, 50, 100, and 200 predictors.
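The study does not publish its code, but the narrowing step described here, ranking roughly 700 candidate predictors and testing progressively smaller subsets, resembles a standard feature-selection loop. The sketch below is a minimal illustration of that idea in Python with scikit-learn; the ranking model, variable names, and scoring metric are assumptions for illustration, not details from the paper.

```python
# Hypothetical sketch of ranking candidate predictors and testing
# subsets of the top 20, 50, 100, and 200 features. All names and
# modeling choices here are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def rank_and_test_subsets(X, y, feature_names, sizes=(20, 50, 100, 200)):
    # X: cleaned feature matrix (~700 candidate predictors)
    # y: binary label indicating whether agitation occurred
    baseline = RandomForestClassifier(n_estimators=200, random_state=0)
    baseline.fit(X, y)
    # Order features from most to least important
    order = np.argsort(baseline.feature_importances_)[::-1]

    results = {}
    for k in sizes:
        top_k = order[:k]
        model = RandomForestClassifier(n_estimators=200, random_state=0)
        # Score each candidate subset with cross-validated AUC
        auc = cross_val_score(model, X[:, top_k], y,
                              scoring="roc_auc", cv=5).mean()
        results[k] = (auc, [feature_names[i] for i in top_k])
    return results
```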
In developing the model, the team split the data into two parts: one to train the model and one to test how well it worked. They tested three machine learning methods to find the best fit and to ensure the model wouldn’t just memorize patterns from the training data but could reliably make predictions based on new cases. They also checked for fairness across age, sex, and race to avoid unintended bias.
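The article does not name the three machine learning methods that were compared, so the following sketch stands in with three common classifiers to illustrate the split-and-compare workflow; every name and parameter here is an assumption, not the study’s actual configuration.

```python
# Hypothetical sketch of splitting data and comparing candidate models
# on a held-out test set, so scores reflect performance on new cases
# rather than memorized training patterns.
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

def compare_models(X, y):
    # Hold out 20% of visits for testing, preserving class balance
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)

    candidates = {
        "logistic_regression": LogisticRegression(max_iter=1000),
        "random_forest": RandomForestClassifier(n_estimators=200),
        "gradient_boosting": GradientBoostingClassifier(),
    }
    scores = {}
    for name, model in candidates.items():
        model.fit(X_tr, y_tr)
        # Evaluate on the held-out data only
        scores[name] = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    return scores
```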
The final model used 50 key features, including a patient’s age, insurance status, past medical history, medications, reason for visiting the ED, and whether they had a primary care provider. Although race and ethnicity were included in the dataset to check for fairness, that information was intentionally excluded from the model to prevent bias. Some of the strongest warning signs for future agitation, they found, included frequent past visits to the ED, abnormal vital signs, relevant medical history, and any previous use of restraints or sedation.
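One common way to run the kind of fairness check described here is to keep demographic labels out of the model’s inputs and use them only afterward to compare performance across groups. The snippet below is a minimal sketch of that audit pattern; the study’s actual fairness analysis may differ.

```python
# Illustrative subgroup audit: the group labels (e.g., race, sex, age
# band) are never fed to the model; they are used only after prediction
# to compare held-out performance across groups.
import numpy as np
from sklearn.metrics import roc_auc_score

def subgroup_auc(y_true, y_score, groups):
    results = {}
    for g in np.unique(groups):
        mask = groups == g
        if len(np.unique(y_true[mask])) < 2:
            continue  # AUC is undefined when a subgroup has one class
        results[g] = roc_auc_score(y_true[mask], y_score[mask])
    return results
```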
“What we don’t want is for the model to be self-fulfilling but instead predict agitation early in a visit before symptoms even develop,” Wong said. “Our ultimate goal is to better allocate critical and limited resources to those that are in most need or that have the greatest benefit.”
In addition to showing that machine learning can be a powerful tool for predicting patient agitation in the ED, the researchers say the approach could improve both patient and provider safety by shifting care toward prevention rather than reactive treatments such as sedation or restraints.
“On the patient side of things, physical restraint or chemical restraint can be a pretty dehumanizing experience,” said Andrew Taylor, associate professor adjunct of biomedical informatics and data science at YSM and senior author of the study. “On the staffing side, when people become violent or aggressive, there can be both emotional trauma and physical trauma to staff, too. Anything to decrease that [restraint] is both patient-centered and provider-centered.”
Ultimately, researchers say, the study was a test case for the team’s prediction model. Next steps include scaling the model, getting buy-in from clinicians, and implementing it in hospital systems. “We know the model works, but we really have to get it implemented,” Taylor said. “We have to build up that clinical support around it and look at the pragmatics of when to deploy it and how to deploy it so that it best fits with our staff’s workflow.”
Taylor is also professor of emergency medicine at the University of Virginia. Wong is also the director of simulation research at the Yale Center for Healthcare Simulation.
Other Yale authors include Atharva Sapre, Kaicheng Wang, Bidisha Nath, Dhruvil Shah, Anusha Kumar, Isaac Faustino, Riddhi Desai, Yue Hu, Leah Robinson, Can Meng, Guangyu Tong, Edward Melnick, and James Dziura.
This study was supported by grants from the National Institute of Mental Health, the National Institute of Nursing Research, the National Institute on Minority Health and Health Disparities, and the Patient-Centered Outcomes Research Institute.