The views expressed by contributors are their own and not the view of The Hill

Dangers of artificial intelligence in medicine

Two of the most significant predictions for the new decade are that AI will become more pervasive and that the U.S. health-care system will need to evolve. AI can augment and improve the health-care system, helping it serve more patients with fewer doctors.

However, health innovators need to be careful to design a system that enhances doctors’ capabilities rather than replacing them with technology, and to avoid reproducing human biases.

A recent study published in Nature, conducted in collaboration with Google, reports that Google’s AI detects breast cancer better than human doctors. Babylon Health, the AI-based mobile primary care system implemented in the United Kingdom in 2013, is coming to the U.S.

Health care is an industry in need of AI assistance due to a shortage of doctors and widespread physician burnout.

Doctors in the U.S. are experiencing a burnout crisis. Nearly 45 percent of physicians report burnout, and the physician suicide rate is twice that of the general population. Research shows physicians experience burnout because of a poorly designed health-care system that fails to protect them or their patients.

Physician burnout has been linked to increased medical errors, unprofessional behavior, early retirement, depression, and racial bias.

In 2019 the Journal of the American Medical Association published a study of 3,392 second-year resident physicians who self-identified as non-Black and found that symptoms of burnout were associated with explicit and implicit racial biases. 

A study from the Mayo Clinic reported poorly designed electronic health records as a contributor to physician burnout.

Another major contributor to burnout is a shortage of physicians relative to the growing number and needs of patients who require care. The Association of American Medical Colleges predicts a shortage of between 21,100 and 55,200 primary care physicians by 2032.

While AI systems are a possible solution, they can also cause problems. Increased medical error is a real potential consequence of poorly designed AI in medicine.

Medical error is the third leading cause of death in the U.S., attesting both to the need to improve the system and to the system’s fragility and the consequences of poor design.

Eliminating the empathetic relationship is another potential consequence of poorly designed and integrated AI. Health care is built on human-to-human connection.

Humans desire and benefit from the problem-solving that comes from conversation. In clinics with electronic health records, physicians spend only about 27 percent of their office time on direct patient care, and just 52 percent of their time in the exam room interacting with the patient.


Replacing humans with technology inappropriately could lead to complacency from physicians and reduced engagement from patients.

AI could lead to new inequities and biases. Recent studies have shown that Black people are less likely to receive proper treatment for lung cancer and adequate treatment for pain because of false beliefs about differences between Black and white patients.

While some may conclude that AI would remove the biases minority patients face by focusing on objective data, new research identifies inequities in AI systems themselves.

A study published in Science in 2019 found that an algorithm used in U.S. hospitals systematically discriminated against Black patients by allocating less care to them.

Babylon’s AI-based chatbot has sparked controversy: concerns about the chatbot’s safety have been reported, though Babylon disputes these claims.

Many of the disruptive aspects of the AI system are unique to the National Health Service, which assigns patients to practitioners. With new funding and support, Babylon will enter the U.S. market. Health safety advocates must be prepared to champion the unique needs of patients in the American health-care system.

For AI systems to work without harm, a greater understanding of what clinicians do, and of their current biases, is needed. Designing systems that preserve what clinicians do best, without the risk of complacency, is critical.

Both tech companies and health-care leadership are composed primarily of white men, who may not be trained to think about bias comprehensively. Diversifying the workforce of the companies building AI systems, and of those innovating in the health-care system, is needed.

A recent report found that people of color and women are underrepresented in the AI field, as about 80 percent of AI professors are men, and people of color are only a fraction of staff at major tech companies. 

Diversifying the pipeline of researchers is essential; equally important is building inclusive workplaces and communities that allow under-represented minority researchers to thrive. 

Yes, well-designed AI in health-care systems can transform the health and well-being of members of society by allowing health-care professionals to provide better quality care to more people and by restoring balance to the people who dedicate their lives to providing care.

But the danger is that AI systems that pit humans against algorithms will likely introduce new biases and errors into the U.S. health-care system, not only exacerbating health disparities but also making health care more dangerous for everyone.

Enid Montague, Ph.D. is an expert on human-centered automation in medicine, Associate Professor of Computing at DePaul University, Adjunct Associate Professor of General Internal Medicine at Northwestern University, and a Public Voices Fellow through The OpEd Project.
