Maternal health has become an important conversation this election season. Three women, along with their loved ones, took the stage on the first night of the Democratic National Convention and recounted their harrowing reproductive health experiences. Kaitlyn Joshua of Baton Rouge, Louisiana, recalled being denied care while suffering a miscarriage: “I was in pain, bleeding so much my husband feared for my life.”

Childbirth in the United States poses a greater risk than in other high-income nations. As many as 80% of pregnancy-related deaths are preventable, according to the Centers for Disease Control and Prevention. Artificial intelligence is being used to address maternal health disparities by predicting pregnancy complications, monitoring for fetal abnormalities, identifying high-risk pregnancies, and improving access to care.

The problem with using AI in maternal healthcare, however, is that the technology is often designed without patients of color in mind, which can lower the quality of care, restrict access to it, and even harm birthing people. For example, Harvard researchers found that an algorithm predicted that Black and Latina women were less likely than white women to have a successful vaginal birth after C-section (VBAC). That bias could lead doctors to perform more C-sections on women of color. After years of research, the algorithm was updated so that it no longer considers race or ethnicity when predicting the likelihood of a successful VBAC.
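To make concrete what “no longer considering race or ethnicity” means in practice, here is a minimal sketch, in Python with scikit-learn, of training the same kind of outcome model with and without a race/ethnicity input. The column names and data are synthetic placeholders; this is not the actual VBAC calculator.

```python
# Minimal sketch (not the actual VBAC calculator): training the same risk
# model with and without a race/ethnicity feature. All column names and
# data below are synthetic, for illustration only.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2_000
df = pd.DataFrame({
    "age": rng.normal(30, 5, n),
    "bmi": rng.normal(28, 5, n),
    "prior_vaginal_birth": rng.integers(0, 2, n),
    "race_ethnicity": rng.choice(["Asian", "Black", "Latina", "White"], n),
    "successful_vbac": rng.integers(0, 2, n),  # synthetic outcome label
})

clinical_features = ["age", "bmi", "prior_vaginal_birth"]
X_with_race = pd.get_dummies(df[clinical_features + ["race_ethnicity"]])
X_without_race = df[clinical_features]
y = df["successful_vbac"]

for label, X in [("with race/ethnicity", X_with_race),
                 ("clinical factors only", X_without_race)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(f"{label}: held-out accuracy = {model.score(X_te, y_te):.2f}")
```

Comparing the two versions on held-out data is one way a team can check whether a demographic feature was actually adding predictive value or merely encoding disparities in past care.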

It’s impractical to simply eliminate race and ethnicity from every AI algorithm. These demographic factors play a crucial role in tackling persistent inequalities within healthcare systems. Researchers must be intentional about how and when race and ethnicity data are used in the creation of AI.

Maternal healthcare AI algorithms rely on data. When that data does not represent our most vulnerable populations, or reflects racist provider practices, those biases carry over into AI-driven maternal healthcare services. When marginalized patients, providers from their communities, and health professionals trained in inclusive care collaborate in building AI, it opens the door to addressing bias and beginning an equitable revitalization of maternal healthcare in the United States.

Physicians at Cedars-Sinai acknowledged that, due to provider bias, Black women are less likely to receive low-dose aspirin to prevent pre-eclampsia, a dangerous hypertensive complication of pregnancy that can cause illness or death. The doctors used AI to identify patients at risk for pre-eclampsia and to automate the decision about prescribing aspirin. This technology increased appropriate aspirin treatment and eliminated the racial disparity in care.
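As an illustration of that mechanism, the sketch below shows how an automated rule might apply one risk threshold uniformly to every patient, so the aspirin recommendation no longer passes through a discretionary step where provider bias can enter. The risk factors, weights, and threshold are hypothetical assumptions for illustration, not Cedars-Sinai’s model or clinical guidance.

```python
# Minimal sketch, not Cedars-Sinai's system: one risk threshold applied
# uniformly to every patient, so the aspirin recommendation does not depend
# on a clinician's discretionary judgment. The risk factors, weights, and
# threshold below are illustrative assumptions, not clinical guidance.
from dataclasses import dataclass

@dataclass
class Patient:
    prior_preeclampsia: bool
    chronic_hypertension: bool
    multiple_gestation: bool
    pregestational_diabetes: bool

def preeclampsia_risk_score(p: Patient) -> float:
    """Toy additive risk score built from a few documented risk factors."""
    score = 0.0
    score += 0.40 if p.prior_preeclampsia else 0.0
    score += 0.25 if p.chronic_hypertension else 0.0
    score += 0.20 if p.multiple_gestation else 0.0
    score += 0.15 if p.pregestational_diabetes else 0.0
    return score

RISK_THRESHOLD = 0.20  # same cutoff for every patient, regardless of race

def recommend_low_dose_aspirin(p: Patient) -> bool:
    return preeclampsia_risk_score(p) >= RISK_THRESHOLD

print(recommend_low_dose_aspirin(
    Patient(prior_preeclampsia=False, chronic_hypertension=True,
            multiple_gestation=False, pregestational_diabetes=False)))  # True
```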

Black women are two to three times more likely to die from pregnancy-related causes than white, Asian, and Latina women, regardless of their income and education. Joshua’s story adds to the steady drumbeat of Black women who feel unseen, undervalued, and unsupported when seeking maternal health services. It is an experience that even Beyoncé and Serena Williams could not evade.

The use of AI in maternal healthcare continues to evolve, and preventing AI bias is crucial not only for the equitable advancement of AI but also for addressing ongoing maternal health disparities in the United States. AI cannot solve the maternal and reproductive health crisis on its own, but it could help pave the way toward equitable care for our most vulnerable populations.

Here Are Three Ways To Prevent AI Bias In Maternal Healthcare Technology

  • Ensure diverse, representative data to avoid biases: The data used to train AI systems should be diverse and represent all demographics. This involves collecting extensive data from various racial, socioeconomic, and geographic backgrounds to avoid perpetuating any existing biases. By incorporating a wide range of data points, AI systems can deliver more accurate and fair health assessments and recommendations.
  • Embrace a multidisciplinary approach involving healthcare experts, ethicists, and community advocates: This means involving not just data scientists and engineers, but also healthcare professionals, ethicists, and community advocates in the design and implementation process. Such collaboration ensures that the AI systems are developed with a thorough understanding of the real-world implications and nuances of maternal health.
  • Establish transparent AI governance with regulatory oversight for monitoring and continuous improvement: Ensure adherence to ethical standards and encourage open communication between patients, healthcare providers, and AI developers so that systems keep improving; a minimal audit sketch follows this list. This promotes trust in AI for enhancing maternal healthcare services.
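As a concrete picture of the monitoring called for in the third recommendation, here is a minimal audit sketch that compares how often a deployed model flags patients as high risk, and how often it misses actual complications, across race/ethnicity groups. The column names and records are hypothetical stand-ins for a real system’s monitoring logs.

```python
# Minimal audit sketch: compare flag rates and missed-case rates across
# race/ethnicity groups. Column names and records are hypothetical stand-ins
# for a deployed system's monitoring logs.
import pandas as pd

audit = pd.DataFrame({
    "race_ethnicity":    ["Asian", "Black", "Black", "Latina", "White", "White"],
    "flagged_high_risk": [0, 0, 1, 0, 1, 1],   # model output
    "had_complication":  [0, 1, 1, 1, 1, 0],   # observed outcome
})

# A missed case is a patient who had a complication but was never flagged.
audit["missed_case"] = (audit["had_complication"] == 1) & (audit["flagged_high_risk"] == 0)

report = audit.groupby("race_ethnicity").agg(
    flag_rate=("flagged_high_risk", "mean"),
    missed_case_rate=("missed_case", "mean"),
)
print(report)  # large gaps between groups signal a need to rebalance data or retrain
```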