

Could medical chatbots encourage healthcare biases?

Posted on 8 June 2023


Medical chatbots have become increasingly popular since the COVID-19 pandemic.

While many health systems are using artificial intelligence (AI) to create virtual symptom checkers, the technology’s growing prominence has given rise to wider concerns about bioethics and patient privacy.

Now researchers from the University of Colorado (CU) School of Medicine have warned of the ethical issues surrounding the broader use of this technology in medical settings.

A research team led by Professor Matthew DeCamp surveyed more than 300 people and interviewed 30 more about their interactions with medical AI.

Their research, published in the Annals of Internal Medicine, challenges medical professionals to examine chatbots through a health equity lens and assess whether the technology genuinely improves patient outcomes.

Early on, the researchers noticed that patients’ perception of a chatbot’s race could affect both how much they disclosed and how willing they were to follow its healthcare recommendations: users were more likely to speak openly to a chatbot they perceived as sharing their ethnicity.

Although some chatbots have faceless health system logos or cartoon characters as avatars, others could feature digitized versions of a patient’s physician, sharing their likeness and voice.

Strategically designed avatars could also increase trust among underrepresented and underserved patients, with DeCamp explaining “That’s more demonstrative of respect, [and] creates more trust and more engagement. That person now feels like the health system cared more about them.”

However, scientists have also questioned whether healthcare systems should be using avatar design to manipulate patients or influence their health decisions.

According to internal medicine professor Annie Moore, chatbot creators must consider the ethics of medical marketing, bias, and ‘nudges’ – design tweaks intended to influence customer behaviour. “If chatbots are patients’ so-called ‘first touch’ with the health care system, we really need to understand how they experience them and what the effects could be on trust and compassion.”

The paper also highlighted that medical avatars can reinforce social stereotypes – for example, chatbots exhibiting stereotypically ‘feminine’ features or behaviour could contribute to prejudices about women’s role in healthcare.

While the CU team found that the ethical dilemmas posed by medical chatbots mirror those previously identified in in-person medical settings, they stressed the importance of ensuring that medical technology doesn’t restrict patient choice or mislead patients.

The researchers have also called on the medical community to recognize the potential implications of AI tools: “Addressing biases in chatbots will do more than help their performance. If and when chatbots become a first touch for many patients’ health care, intentional design can promote greater trust in clinicians and health systems broadly.”

Related: Industry leaders brand AI advances ‘dangerous’
