Artificial intelligence has become part of our everyday lives. From the algorithms that recommend Netflix shows to the systems used by companies to screen job applicants, AI is embedded in one out of three decisions that affect our personal and professional futures. However, while AI may promise efficiency and objectivity, it often mirrors the biases of the societies that create it. Far from being neutral, AI can reinforce social inequalities and perpetuate stereotypes, especially in areas like race, gender, and culture. The issue isn’t just about faulty technology—it’s about how AI amplifies the flaws in the human systems that shape it.
Our bias has crept into AI, but it doesn’t have to remain there.
How Bias Sneaks into AI
Imagine applying for a job at a top company. You’ve polished your submission, nailed the interview, and feel confident. But behind the scenes, the company’s AI screening system has filtered you out based on a subtle bias in the data it was trained on. If that AI was trained on CVs from a homogenous pool of applicants—largely white and male—it may unknowingly favor candidates with backgrounds similar to that data set, penalizing people with different names, backgrounds, or experiences.
Or think about the voice assistants many of us use daily, like Siri or Alexa. For years, studies showed that these systems struggled to understand certain accents—especially non-American, non-European ones. These AI assistants were designed and trained predominantly using voices that reflect specific linguistic norms, leading to frustration for millions of users with different accents. This bias may seem unimportant until you realize its impact, or experience it firsthand.
The old saying ‘Garbage in, garbage out’ sadly still holds. Bias sneaks into AI systems through flawed data. Are we not only increasingly wired, but also more and more WEIRD? Much of AI’s training data comes from WEIRD (Western, Educated, Industrialized, Rich, and Democratic) societies, a term coined by researchers back in 2010. These populations represent only about 12% of the world’s population, yet they dominate the datasets used to train AI. When algorithms rely on such limited samples, they fail to account for the diversity of human experience, leading to outputs that unfairly favor certain groups over others.
AI’s Reflection of Societal Bias Has Consequences
The bias in AI systems is not an abstract issue—it has harsh consequences for people’s lives. One striking example is facial recognition technology, which is now used in everything from unlocking smartphones to surveillance. Research from MIT’s Media Lab found that facial recognition systems consistently perform worse for people with darker skin tones, with error rates as high as 35% for darker-skinned women compared to just 0.8% for lighter-skinned men. This disparity occurred because the algorithms were trained on datasets that heavily featured lighter-skinned individuals.
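To make “performs worse for some groups” concrete, here is a minimal sketch of the kind of per-group error-rate audit behind such findings. The data and column names are purely hypothetical; the point is that accuracy has to be broken down by demographic group rather than reported as a single average.

```python
# Minimal sketch of a per-group error-rate audit (hypothetical data),
# in the spirit of the MIT Media Lab comparison cited above.
import pandas as pd

# Toy audit table: one row per test image
df = pd.DataFrame({
    "group":      ["darker_female", "darker_female", "lighter_male", "lighter_male"],
    "true_label": [1, 1, 1, 0],
    "predicted":  [0, 1, 1, 0],
})

# Error rate per group: a large gap signals a biased model or unbalanced training data
error_rate = (
    df.assign(error=lambda d: (d["true_label"] != d["predicted"]).astype(int))
      .groupby("group")["error"]
      .mean()
)
print(error_rate)
```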
Imagine what this means beyond the lab. In law enforcement, for example, facial recognition systems are increasingly used to identify suspects. If these systems are more likely to misidentify people of color, wrongful arrests and unjust convictions follow, further entrenching racial discrimination in the criminal justice system.
A 2024 study from Stanford’s Human-Centered AI Institute showed that large language models consistently perpetuate harmful stereotypes. When these systems process names that sound African American, they are more likely to link them to negative traits such as criminality or violence. Meanwhile, names that sound European are associated with more positive traits. This is not simply a technical flaw—it reflects centuries of racial bias embedded in our language and culture, which AI systems are now learning from.
The Issue Isn’t Just the AI—It’s Us
The problem isn’t that AI systems are consciously biased; it’s that they reflect the biases of the humans who build and train them. Data, after all, is a product of human experiences and choices. If AI developers rely on skewed data or fail to account for the diversity of human experiences, the systems they build will inevitably be biased. This bias, often subtle and unintended, perpetuates the inequalities that already exist in society. GIGO.
For example, AI-powered hiring tools are designed to streamline recruitment by sifting through thousands of applications. But if these tools are trained on data from a company’s past hires—who might be overwhelmingly male or from similar educational backgrounds—they will continue to favor those same candidates, effectively shutting out more diverse applicants. It’s a classic illustration of AI reflecting the status quo, rather than creating a more inclusive future. If we want outputs that reflect our values, we need to manifest those values in our own behavior. Values in, values out – VIVO.
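A small synthetic sketch makes this “status quo in, status quo out” dynamic visible. Everything below is invented for illustration—the features, the bias baked into the historical labels, and the choice of model. The takeaway is that even when gender is excluded from the inputs, a correlated proxy feature quietly carries the old skew into new predictions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical applicant pool
gender = rng.integers(0, 2, n)                        # 0 = male, 1 = female
experience = rng.normal(5, 2, n)                      # same skill distribution for everyone
elite_school = rng.binomial(1, 0.6 - 0.3 * gender)    # proxy feature correlated with gender

# Historical hiring decisions were biased: equally skilled women were hired less often
p_hire = np.clip(0.2 + 0.05 * experience + 0.2 * elite_school - 0.25 * gender, 0, 1)
hired = rng.binomial(1, p_hire)

# Train a screening model WITHOUT the gender column
X = np.column_stack([experience, elite_school])
model = LogisticRegression().fit(X, hired)

scores = model.predict_proba(X)[:, 1]
print("mean screening score, men:  ", round(scores[gender == 0].mean(), 3))
print("mean screening score, women:", round(scores[gender == 1].mean(), 3))
# The gap persists: the proxy feature reproduces the historical bias.
```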
Language Models and Everyday Racism
The bias in AI systems also manifests in everyday interactions through AI-powered language models. Take chatbots, for instance. These AI systems are trained on text data scraped from the internet—a space filled with both valuable information and deeply entrenched prejudices. The Stanford study mentioned above found that Black-sounding names were disproportionately associated with negative attributes when processed by these models, reinforcing harmful stereotypes without users even realizing it.
When you interact with AI every day—whether through a chatbot, search engine, or predictive text—you may not see the bias at work. But over time, these subtle, persistent stereotypes reinforce existing prejudices. The worst part is that we get gradually desensitized, and no longer even notice the artificial bias that silently influences our own decisions, shaping our personal lives, our careers, and ultimately our future.
Moving Toward FAIRer AI: A Practical Framework
Addressing AI bias requires more than technical fixes. It calls for a deliberate grasp of how data, algorithms, and human biases intertwine. Here’s a tangible framework to guide individuals, institutions, and policymakers in creating more equitable AI systems—FAIR: Fair Data, Audits, Inclusivity, and Regulation.
F – Fair Data
Start by ensuring that the data feeding AI systems is as inclusive and representative as possible. For instance, a company building a hiring algorithm should ensure its data reflects a diverse range of applicants—across gender, race, socioeconomic background, and education level. Collecting fair data requires a conscious effort to include underrepresented groups and to ensure that AI doesn’t perpetuate biases rooted in past decisions.
Example: Amazon scrapped a recruiting algorithm that downgraded CVs from women. The system had been trained on past hiring data, which was male-dominated, leading it to favor male candidates.
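One way to operationalize “fair data” is a simple representation check before any model is trained. The sketch below uses a toy applicant table and an assumed reference population; in practice the reference shares would come from census or labor-market figures relevant to the role.

```python
import pandas as pd

# Toy training set standing in for a real applicant pool (assumed 'gender' column)
train = pd.DataFrame({"gender": ["male"] * 70 + ["female"] * 30})
reference = {"female": 0.50, "male": 0.50}   # assumed target shares, e.g. from census data

observed = train["gender"].value_counts(normalize=True)
for group, target in reference.items():
    share = observed.get(group, 0.0)
    flag = "under-represented" if share < 0.8 * target else "ok"
    print(f"{group}: {share:.0%} of training data vs {target:.0%} reference -> {flag}")
```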
A – Audits and Accountability
Regular audits can uncover hidden biases. Independent third-party audits help identify where and how AI systems may be favoring certain groups or outcomes. These audits should be mandatory for AI systems, especially those used in critical areas like healthcare, law enforcement, or hiring.
Example: A credit scoring algorithm might be audited to ensure it’s not disproportionately denying loans to applicants from minority backgrounds. Transparent audits hold AI developers and users accountable for how their systems are deployed.
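As a sketch of what such an audit could look like in code, the snippet below computes a disparate-impact ratio (the “four-fifths rule” familiar from hiring and fair-lending reviews) on synthetic loan decisions. The groups, counts, and 0.8 threshold are stated assumptions; a real audit would also examine error rates, proxy features, and outcomes over time.

```python
import pandas as pd

# Synthetic decision log: 100 applicants per group
decisions = pd.DataFrame({
    "group":    ["minority"] * 100 + ["majority"] * 100,
    "approved": [1] * 55 + [0] * 45 + [1] * 80 + [0] * 20,
})

# Approval rate per group and the ratio between them
rates = decisions.groupby("group")["approved"].mean()
ratio = rates["minority"] / rates["majority"]
print(rates)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Below the 0.8 threshold -> flag the model for review")
```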
I – Inclusivity in Design
Diversity is not an abstract concept. Building diverse AI systems becomes more natural when the teams that design them are diverse themselves. Developers from varied backgrounds bring a wider array of perspectives and are more likely to catch biases that others might miss. A multicultural transdisciplinary AI development team can create systems that better serve the full spectrum of users.
Example: If AI development teams systematically include more women, people of color, and individuals from different socioeconomic backgrounds, they are less likely to overlook biases that could harm underrepresented groups.
R – Regulation and Ethical Standards
Finally, clear regulations and ethical guidelines are needed to govern the use of AI in sensitive areas. Governments should enforce standards that promote transparency, fairness, and accountability. These regulations must evolve as AI technology advances, ensuring it serves the public good.
Example: Europe’s General Data Protection Regulation has set a precedent for enforcing data privacy and protection. Similar regulations should apply to AI systems, requiring transparency about how decisions are made and who is accountable for their outcomes.
A FAIRer Path Forward
Bias in AI is a ticking time bomb. If we continue to integrate AI into ever more decision-making processes, we must act now to make it fair, before we become oblivious to its skew.
The consequences of failing to put guardrails in place are already all too real. AI systems that truly reflect and serve the diversity of the world we live in will not emerge by themselves. They require deliberate human choices.