Can AI truly be inclusive? Here's a DEIB perspective

AI is a tool. It is neither inherently good nor bad; it simply reflects the intentions and biases of its creators. And since those creators are human beings who are not free from bias, this mindless technology (despite the "intelligence" in its name) reproduces those biases to varying degrees.

Now that AI is part of the "toolbox" of more and more people, including workers and business leaders, the intersection of artificial intelligence with diversity, equity, inclusion, and belonging (DEIB) has become urgent.

As organisations formalise their AI and DEIB strategies, the challenge is not to choose between the two, but to align them.

Without careful design, AI can unintentionally perpetuate inequality, marginalise communities, and amplify existing systemic issues.

AI for Equity

Progressive organisations now face a crucial moment to position AI as an enabler of equity. Rather than treating inclusivity as an afterthought, those that build AI with fairness at its core can strengthen trust, mitigate reputational risk, and reinforce long-term value.

Sandra Rosier, a seasoned DEIB strategist and advisor to senior executives, puts it succinctly: the consequences of ignoring inclusion during AI development can be severe. “There’s devastating damage that occurs when AI is not used or created in an ethical and inclusive manner,” she says.

The issue, she argues, is deeply rooted in the quality of the training data—datasets often shaped by historical prejudice. Rosier notes, “Intent is everything. Inclusivity must be designed into systems from the outset, not tacked on afterwards.”

Real-world outcomes already demonstrate what’s at stake.

From biased hiring platforms to racial disparities in facial recognition used by law enforcement and discriminatory algorithms in financial services, the risks of exclusionary AI are tangible and growing.

Tory Clarke, Co-Founder of executive search firm Bridge Partners, also urges caution. “AI in its current form is simply too immature to be used as a determining factor or as a substitute for human judgment—especially when choosing a leader,” she explains.

Emotional intelligence, contextual awareness, and cultural fluency—core attributes in leadership assessment—remain beyond the scope of today’s models. Clarke warns that overreliance on technology can “simply replicate existing biases under the guise of objectivity.”

These concerns are shared widely within the HR profession. A recent survey by the Chartered Institute of Personnel and Development (CIPD) revealed:

  • 31.2% of HR professionals believe AI carries the same biases as humans;
  • 30.2% worry it may exacerbate workplace inequality;
  • 38.6% believe it could reduce bias—if managed with appropriate safeguards.

Structural Barriers to Inclusive AI

Despite mounting interest in building responsible AI, organisations continue to face a number of structural and cultural roadblocks that limit inclusive outcomes.

No shared definition of AI

The term “AI” is interpreted differently across sectors, making cross-functional alignment a challenge. Without consistent language or standards, building shared DEIB principles around AI becomes difficult to operationalise.

Lack of diversity in AI development

The AI talent pool remains largely homogeneous, especially in Western markets. Teams often consist of white, male technologists, which reinforces an echo chamber effect. As Clarke observes, "When teams lack diverse perspectives, the risk of blind spots rises exponentially. You end up with echo chambers rather than balanced, inclusive systems."

Bias embedded from the beginning

Homogeneous teams—however skilled—are prone to overlook lived experiences that differ from their own. In sectors like healthcare or insurance, these blind spots can translate into discriminatory practices and outcomes, inadvertently excluding already marginalised communities.

Slow organisational accountability

Even when flaws are flagged, organisations can be slow to respond. For instance, some facial recognition tools continue to misidentify non-white individuals at alarmingly high rates. Yet the push to remediate these issues often lags behind commercial priorities.

Budget cuts to DEIB during downturns

Economic headwinds frequently lead to cuts in DEIB spending, undermining the very systems needed to safeguard fairness. Clarke challenges this logic: “DEIB shouldn’t be treated as a discretionary cost. During downturns, holding the line on ethical standards signals resilience and clarity of purpose.”

The unseen workforce powering AI

Beneath the surface of sophisticated systems lies an often invisible layer of human labour. Take OpenAI’s use of low-paid workers in Kenya, who were tasked with filtering toxic content to make AI safer for public use. Rosier highlights the contradiction: “We cannot talk about inclusive AI if the people behind the scenes are being excluded from ethical consideration.”

Bridging the Gap: What Leaders Can Do

The good news is that business and HR leaders are uniquely placed to steer AI development toward inclusivity. Here are key actions to consider:

Involve diverse perspectives from the start: Inclusion must begin at the design stage. Bringing in voices across race, gender, disability, and socio-economic status helps ensure systems are built for everyone—not just the majority.

Treat AI risks like ESG risks: Governance models that work for environmental and social responsibility should be adapted for AI. Clarke suggests leaders ask not just “Can we do this?” but “Should we?”

Protect DEIB investments in all market conditions: Maintaining funding for DEIB—even when budgets tighten—sends a powerful message about company values and leadership integrity.

Interrogate homogeneity in tech spaces: Use your influence to demand diversity on panels, in vendor selection, and at AI conferences. Representation shapes innovation.

Use AI to improve—not replace—human judgment: When trained and audited by inclusive teams, AI can support fairer hiring processes, flag unconscious bias, and enhance accessibility.

Embed AI governance into existing structures: AI oversight should sit alongside HR, ESG, compliance, and legal—not operate in a silo. This ensures consistency with broader business goals.

Model leadership in regulation and transparency: Businesses need not wait for regulators to set the rules. Proactively implementing audits, publishing fairness reports, and collaborating with civil society can set a precedent for responsible AI.
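
On that last point, one concrete starting place is an adverse-impact audit of automated screening decisions. The sketch below is a minimal, hypothetical illustration in Python, not a prescribed methodology: it computes selection rates per demographic group and flags any group whose rate falls below four-fifths of the highest group's rate, the "four-fifths rule" long used in US employment-discrimination analysis. The sample data, group labels, and threshold are invented for illustration.

    from collections import defaultdict

    # Hypothetical screening outcomes: (demographic_group, was_selected).
    # In practice these would come from an ATS or a model's decision log.
    decisions = [
        ("group_a", True), ("group_a", True), ("group_a", False),
        ("group_b", True), ("group_b", False), ("group_b", False),
    ]

    def selection_rates(records):
        """Share of candidates selected, per group."""
        selected, total = defaultdict(int), defaultdict(int)
        for group, was_selected in records:
            total[group] += 1
            selected[group] += int(was_selected)
        return {g: selected[g] / total[g] for g in total}

    def adverse_impact_flags(rates, threshold=0.8):
        """Flag groups whose selection rate is below `threshold` times
        the highest group's rate (the four-fifths rule)."""
        best = max(rates.values())
        return {g: (r / best) < threshold for g, r in rates.items()}

    rates = selection_rates(decisions)
    for group, flagged in adverse_impact_flags(rates).items():
        status = "REVIEW" if flagged else "ok"
        print(f"{group}: selection rate {rates[group]:.2f} [{status}]")

An audit like this does not make a system fair on its own, but publishing the results as part of regular fairness reporting gives regulators, candidates, and employees something verifiable to hold the organisation to.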
