Most clinicians reported that suicide risk flags in electronic medical records influenced their clinical decision-making, with their decisions tied significantly to which features were highlighted rather than to the presence of the risk flag alone, according to study results published in the Journal of Clinical Psychiatry.
“There is a great deal of enthusiasm for using machine learning to detect patients who are at high risk for suicide,” Lily A. Brown, PhD, of the department of psychiatry at University of Pennsylvania, told Healio Psychiatry. “Many health care systems are working to incorporate these algorithms directly into the electronic health record. However, the recommendations that stem from these algorithms are only helpful if clinicians interpret the recommendations correctly. This is the first study of its kind to explore how clinicians report that they would interpret suicide risk flag recommendations from machine learning algorithms.”
According to Brown and colleagues, several critical characteristics of machine learning algorithms affect how their outputs should be interpreted:
- Algorithms provide a computation rather than a model.
- Because they are not models, they do not allow for causal inferences.
- Features that drive classification into a high-risk group may have no causal relation to suicide risk.
Thus, machine learning algorithm features with the highest relative influence on classification may not provide appropriate clinical intervention targets, they noted.
Brown and colleagues aimed to evaluate perceptions of suicide risk flags among 139 mental health clinicians who completed online surveys.
Results showed that over 94% of participants preferred to know which features resulted in a patient's receipt of a suicide risk flag, and over 88% reported that knowledge of those features would influence their treatment. The researchers also observed that certain algorithm features, such as increased thoughts of suicide, were more likely to alter clinical decisions than others, such as age or physical health conditions (P < .001). Further, clinicians reported that they were more likely to respond to a suicide risk flag with a safety/crisis response plan than with other interventions (P < .001), and 21% reported that following a suicide risk flag, they would complete a no-suicide contract.
“These findings suggest that clinicians really want to understand ‘what's in the black box’ of the algorithm,” Brown told Healio Psychiatry. “If a clinician receives a suicide risk flag for a patient but does not understand why their patient was deemed at risk, these findings suggest that the clinician might ignore the flag altogether. In addition, clinicians overwhelmingly preferred that the algorithms generating these risk flags are programmed to not miss anyone, which is not altogether surprising. However, clinicians generally had a preference for an algorithm that is extremely sensitive, meaning that there would be a lot of false positives. An overly sensitive algorithm will lead to an abundance of risk flags, which might become ignored over time.” – by Joe Gramigna
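The tradeoff Brown describes can be made concrete with a toy calculation (the numbers below are illustrative assumptions, not figures from the study): when the flagged outcome is rare, even a highly sensitive algorithm produces mostly false positives, which is why an abundance of flags risks being ignored.

```python
# Toy illustration of the sensitivity/false-positive tradeoff.
# All numbers are made up for illustration; none come from the study.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """Fraction of flagged patients who are truly at risk (Bayes' rule)."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Suppose 1% of patients are at high risk, and the algorithm is tuned to
# be extremely sensitive (95%) at the cost of specificity (80%).
ppv = positive_predictive_value(sensitivity=0.95, specificity=0.80, prevalence=0.01)
print(f"PPV: {ppv:.1%}")  # only about 4.6% of flags are true positives
```

Under these assumed rates, roughly 20 of every 21 flags would be false alarms, illustrating Brown's concern that clinicians may learn to tune the flags out.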
Disclosures: The authors report no relevant financial disclosures.
Features of suicide risk flags detected via machine learning greatly influence clinical decision-making - Healio
"machine" - Google News
May 08, 2020 at 09:06PM