How can health systems ensure their machine learning practices are ethical? - Healthcare IT News

Machine learning has the potential to completely transform the way healthcare is delivered, but unlocking those new approaches can come with risks.

Ethical questions should be asked in the design and implementation of machine learning models to ensure models are developed to maximize benefit and avoid potential harm. Machine learning relies on access to historical data, often containing personal information, and frequently available in lower quantity and quality than would be ideal.

How does one protect privacy, account for inherent bias, ensure that the right people benefit and explain complex models? These are ethical challenges faced in the development of this capability.

Clinicians are ethical bastions

"Our healthcare providers hold to a strong moral and ethical code," said Kevin G. Ross, CEO of Auckland, New Zealand-based Precision Driven Health, an award-winning, multimillion-dollar public-private research partnership applying data science to enable precision health to become a reality. 

"As some of the most trusted roles in society, clinicians hold a place of honor that both they and their patients rely upon and reinforce through their interactions," said Ross.

"As with any tool that is introduced into patient care, machine learning should be evaluated on the benefits and risks to patient and provider," he said. "Ethics describes our value system and machine learning means using computational power to build models and make decisions on our behalf. As gatekeepers for patient care decisions, clinicians will not adopt or recommend machine learning unless it aligns with their values and builds upon their trusted foundation."

What makes machine learning particularly challenging is the evolutionary nature of algorithms, Ross noted. Whereas a new device or drug can usually be evaluated in a relatively well-established path of clinical trials, a machine learning algorithm may perform quite differently today from yesterday, and give quite different results for different people and contexts, said Ross.

"When we allow machine learning to contribute to decision-making, we are introducing an element of real-time research that doesn't easily replicate the rigor of our traditional research evaluation studies," he explained. "Therefore we must, from the very conceptual design stage, think about the ethical implications of our new technologies."

Stopping to think things through

The most important processes involve bringing together people with a range of perspectives to think through what could happen when a model is deployed. It's very easy to get lost in the science of building great models and completely miss both the opportunities and the risks those models create, Ross said.

"Two of the most important processes are a traditional peer review, where someone who understands the data science looks closely at the model and its assumptions, and a risk assessment with the help of a nontechnical person," he said. 

"Asking a consumer, clinician or planner how they expect a model to be used may identify completely unexpected uses. Could a model designed to accelerate care indirectly penalize one group of people? Could requiring additional personal data exclude the intended beneficiaries?

"Documenting what you believe could be the consequence of releasing a model – then monitoring what happens when you do – is an important practice that allows each model to continuously improve through its lifecycle," he added.

Automating current practice

The easiest thing to do with machine learning, Ross explained, is to automate current practice.

"Our techniques are designed and measured on their ability to replicate the past," said Ross. "But what if the past isn't ideal? Are we more efficiently making poor decisions? What happens when a model encounters a new combination? People intuitively learn and relate an unusual or new case to what they do or can know already.

"Machines could do the same, or they could make assertions without sufficient relevant information," he added. "This means by nature that minorities, who are generally poorly represented in past data and experience poorer outcomes, will almost certainly benefit less from machine learning, and may experience more harm. Our modelling techniques and processes must be designed to handle these challenges and constantly improve on the past."

Ross will offer more detail during his HIMSS21 session, Ethical Machine Learning. It's scheduled for August 10, from 11:30 a.m. to 12:30 p.m. in Venetian San Polo 3404.

Twitter: @SiwickiHealthIT
Email the writer: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.
