Bridging the equity gap: Can AI be trained to resolve issues?

For progress to be made in ensuring healthcare is available to and effective for all, the industry must first tackle AI’s cognitive biases and make it ‘less human.’



Something that’s non-human yet possesses human-like qualities sounds potentially dystopian, yet artificial intelligence has the potential to help us overcome uniquely human challenges that hold us back. One such challenge is achieving diversity and inclusion.

As I've travelled around the world to meet with customers and attend health technology conferences, I've heard many people talking about AI in healthcare. It's the topic du jour of digital healthcare. Missing from the conversation, however, has been how to ensure that AI functions equitably.

AI’s limitations and caveats

The biggest issue with generative AI is that it can only learn from the data it is trained on. And when its foundations are built on the cognitive biases of the humans developing it and on information drawn from all corners of the Internet, AI’s potential in healthcare is fundamentally limited.

Take this example: early versions of ChatGPT failed a simple situational riddle because they could not accept that a doctor or a professor might be a woman. Without diversity built into its core, AI will never help us fully meet one of the greatest challenges in healthcare.

Inequities in healthcare are felt worldwide. They need to be addressed, and there’s no simple answer. One crucial factor, though, is the diversity of the healthcare workforce itself.

In the U.S., only about 5.7 percent of physicians identify as non-Hispanic Black or African American, while that group makes up roughly double that share of the population at large, according to the Office of Minority Health. Layer on top of that a profession that has skewed white and male for generations, and you can see how any AI trained on the profession’s output might inherit those same skews.

Negative bias has long been an issue in healthcare and continues to this day. In 2021, the maternal mortality rate for African American women in the United States was 2.6 times the rate for white women. Even widely used medical algorithms are known to contain significant built-in bias against African Americans.
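One well-documented mechanism behind such algorithmic bias is the choice of a proxy label: models trained to predict healthcare *spending* as a stand-in for healthcare *need* systematically under-prioritize patients who need care but, because of access barriers, have spent less on it. The sketch below illustrates the mechanism with purely synthetic, hypothetical numbers (not real patient data, and not any specific deployed algorithm):

```python
# Synthetic illustration of proxy-label bias (hypothetical numbers).
# Each patient is (group, true_need, past_spending).
# Group A has full access to care, so spending tracks need.
# Group B faces access barriers, so spending is half of need.
group_a = [("A", need, need) for need in (10, 30, 50, 70, 90)]
group_b = [("B", need, need * 0.5) for need in (10, 30, 50, 70, 90)]
patients = group_a + group_b

# A "risk algorithm" that selects the top 3 patients for a care
# program by past spending (the biased proxy).
by_spending = sorted(patients, key=lambda p: p[2], reverse=True)[:3]

# The same selection made on true need (the intended target).
by_need = sorted(patients, key=lambda p: p[1], reverse=True)[:3]

print([g for g, _, _ in by_spending])  # spending proxy selects only group A
print([g for g, _, _ in by_need])      # true need selects both groups
```

Under these assumptions the spending-based ranking excludes group B entirely, even though its highest-need patients are just as sick; the bias comes from the training target, not from any explicit use of group membership.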

In New Zealand, a 2022 study found that Māori accessing urgent care are more likely to be assigned less urgent triage scores and are approximately twice as likely to die either in emergency departments or within 10 days of discharge.

We see first-hand with our digital front door product in Canada that advancing community health looks very different from one community to the next, whether rural, white middle-class, homeless or First Nations. If AI doesn't have the opportunity to learn the full breadth and depth of cultural significance, how can we expect it to communicate with all communities appropriately?

AI’s role in reducing equity gaps

Globally the gap in health equity is widening, but there’s hope. After all, AI is still in its toddler stage, still learning how to walk, talk and comprehend. There’s time to shape it with the open and inclusive mind it needs if it’s going to be useful for achieving equitable healthcare.

Generative AI is a reflection of the information available to it, so until it’s given access to broad and unbiased information, its outputs aren’t getting past the toddler stage on key issues such as equity and diversity.

Diversity and inclusivity are important everywhere and especially important in healthcare. Securing ease of access to fair and equitable healthcare is paramount, and it can be advanced through tools such as digital front doors, which put people in charge of their own healthcare journeys, and by using data to identify gaps in care and target resources effectively.

All humans carry bias, which makes creating an AI that does not inherit that bias both difficult and essential. Unconscious bias training exists for people in leadership; could an AI equivalent exist?

If we have any chance of using AI to its full potential in health, we need to develop AI that can understand the nuance of human healthcare needs, and we need to approach that development carefully.

This isn’t to say that AI hasn’t proven useful in emotionally simple (often administrative and organizational) work; it means that it needs to consider all people within its decision-making before we use it for more. Until this happens, its potential will remain fundamentally limited.

Chris Hobson, MD, is chief medical officer at Orion Health.
