The hidden cost of tech: Where are ethics, governance, and compliance?

Technology is, almost unquestionably, the symbol of progress. But as we speed ahead, driven by artificial intelligence (AI), that “almost” becomes important, especially in light of an uncomfortable question: Are we losing our ethical principles and human touch in the race for efficiency and innovation?

AI is not just changing the way we work. It’s redefining who can work, how they are judged, and even who gets a second chance. And while a handful of major tech companies bask in the spotlight, shadows grow longer, especially over vulnerable communities and underrepresented voices.

AI: Productivity for some, prejudice for others

The benefits of AI are countless; denying them would be like trying to cover the sun with a finger. Yet those benefits are not evenly shared. Powerful companies and wealthy nations exert disproportionate control, while the Global South and marginalized communities remain largely spectators in a game they are not allowed to play.

As former UN High Commissioner for Human Rights Michelle Bachelet said: “We cannot afford to continue playing catch-up on AI: its effects are real and happening right now.”

A clear example: the Dutch tax authority’s attempt to automate fraud detection became a textbook case of digital injustice. A flawed algorithm flagged thousands of innocent families—mostly of foreign origin—as fraudulent, plunging many into financial ruin and family breakdown. What should have been a tool for accountability turned into a weapon of exclusion.

This isn’t just a data error. It’s a governance collapse. Ethics, oversight, and common sense failed dramatically.

Who makes the rules?

AI is a global force, but its rules are written by a small club of voices, mostly from the Western world. The governance frameworks emerging from this bubble often ignore the socio-economic realities of billions.

If AI is going to determine access to employment, healthcare, education, and credit, governance must be inclusive by design. It’s about democratising rule-making and ensuring communities—not just developers—have a voice.

Regulatory compliance is not the mere paperwork it might seem. Too many companies treat ethical AI as a PR exercise: a presentation, a slogan, a checkbox. But ethics isn’t a quarterly campaign; it’s a strategic pillar. Like cybersecurity or ESG criteria, it must be embedded into operations from day one.

It’s time to ask ourselves: Do our algorithms reduce bias or reinforce it? Do performance metrics support employees or surveil them into exhaustion?
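
The first of those questions can be made testable. One common screening heuristic is the “four-fifths rule” from US employment practice: compare selection rates across groups and treat ratios below roughly 0.8 as a red flag. Here is a minimal sketch in Python; the group labels, outcome data, and threshold are illustrative assumptions, not figures from any real system:

```python
from collections import Counter

def selection_rates(decisions):
    """decisions: iterable of (group, hired) pairs; hired is True/False."""
    totals, hires = Counter(), Counter()
    for group, hired in decisions:
        totals[group] += 1
        if hired:
            hires[group] += 1
    return {group: hires[group] / totals[group] for group in totals}

def disparate_impact_ratio(decisions):
    """Lowest group selection rate divided by the highest.

    Under the four-fifths heuristic, ratios below ~0.8 are
    commonly treated as a signal worth investigating."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical outcomes from an AI resume screen (40% vs 25% pass rates)
outcomes = ([("group_a", True)] * 40 + [("group_a", False)] * 60
            + [("group_b", True)] * 25 + [("group_b", False)] * 75)

ratio, rates = disparate_impact_ratio(outcomes)
print(rates)            # {'group_a': 0.4, 'group_b': 0.25}
print(round(ratio, 2))  # 0.62 -> below 0.8, warrants a closer look
```

A low ratio doesn’t prove discrimination, but it is exactly the kind of signal a governance process should catch before regulators or affected employees do.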

The impact of AI on the workforce—and on the daily lives of thousands—is no longer hypothetical. From call centers to warehouses, automation is reshaping job functions with surgical precision.

But these aren’t plug-and-play transitions. Without strong upskilling and reskilling strategies, companies risk fueling economic disparity and unrest.

What’s more, algorithmic management (AI tools that track employee productivity, monitor mood, and even trigger firings) is turning workplaces into digitally filtered environments that are often opaque and not always fair.

Is AI a tool for freedom or a weapon for control?

When just a handful of companies control the tools that shape thought, opportunity, and public discourse, we’re not in a marketplace, but in a monopoly. As author Shoshana Zuboff warns, “AI can be a tool for freedom or a weapon for control. We must choose, and we must choose wisely.”

Left unregulated, AI risks ushering in techno-authoritarianism, where decisions are dictated not by elected leaders but by opaque algorithms optimized for shareholder returns.

The idea of hitting the brakes on AI—or at least not flooring the gas—might seem radical.

But not all AI systems are equal. Some can be safely deployed; others—especially those with high social impact and unclear logic—deserve a pause.

Temporary moratoriums, when used wisely, allow regulators to assess risks, gather public input, and ensure systems align with human rights. The goal is not to block innovation, but to steer it away from the cliff.

What is the cost of ignoring this?

It shows up in three key dimensions:

  1. Direct liabilities: legal consequences, fines, and lawsuits.
  2. Operational inefficiencies: flawed systems requiring constant fixes.
  3. Strategic setbacks: loss of trust, talent drain, and time wasted on damage control.

Christina Montgomery, Chief Privacy and Trust Officer at IBM, writes: “We need to ensure AI aligns with our values… We need governance models that are flexible, risk-based, and rooted in transparency.”

Governance: Build it with purpose

Small businesses often think AI governance is a luxury for giants. In reality, it’s a necessity for everyone. Even modest companies can begin with the steps below (a minimal sketch follows the list):

  • Mapping AI risks across all functions
  • Keeping humans in the loop on critical decisions
  • Creating strong incident response plans
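
To make this concrete, here is one minimal way such a register might look in code. Everything in it (class names, fields, the example system) is an illustrative assumption, not a prescribed standard:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIRisk:
    system: str      # e.g. a resume screener or scheduling tool
    function: str    # business function it touches
    impact: str      # "low" / "medium" / "high" social impact
    owner: str       # person accountable for this risk
    mitigations: list = field(default_factory=list)

@dataclass
class Incident:
    system: str
    description: str
    reported_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

class GovernanceRegister:
    """One place to map AI risks and log incidents across functions."""
    def __init__(self):
        self.risks, self.incidents = [], []

    def register_risk(self, risk):
        self.risks.append(risk)

    def report_incident(self, incident):
        self.incidents.append(incident)

    def high_impact_systems(self):
        # The systems that deserve the closest human oversight
        return [r.system for r in self.risks if r.impact == "high"]

register = GovernanceRegister()
register.register_risk(AIRisk("resume screener", "recruiting", "high",
                              "head_of_people", ["quarterly bias audit"]))
register.report_incident(Incident("resume screener",
                                  "unexplained drop in shortlists from one region"))
print(register.high_impact_systems())  # ['resume screener']
```

Even a lightweight structure like this forces the questions that matter: which systems are high-impact, who owns each risk, and where incidents go when something breaks.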

This isn’t bureaucratic overhead. It’s insurance against ethical catastrophe. And governance pays off. Ethical, inclusive AI design builds trust, brand value, and regulatory resilience. Far from a hindrance, it’s the scaffolding that enables safe growth.

Companies doing it right

There are brilliant examples of what’s possible when ethics leads the way. Dutch electronics company Fairphone places sustainability and transparency at the core of its business. Its ethical sourcing practices, open supply chains, and worker-rights initiatives show that values and viability are not mutually exclusive.

Mozilla, for its part, continues to champion open-source, user-centered technology. Through its responsible AI initiative, it’s embedding fairness and transparency into every line of code.

Public participation and independent audits are built into the development cycle. As Mozilla’s Mark Surman puts it: “We must reclaim the narrative on AI and ensure it’s developed not just for profit—but for people.”

What must regulation cover, and where must it go further?

The debate around regulation has intensified. One school of thought argues that existing laws already cover civil rights and consumer protections, and that adding AI-specific rules would only stifle innovation.

From this perspective, a moratorium on new obligations would create breathing room for experimentation and learning.

However, the opposing camp sees this as a dangerous gamble. They argue that large tech firms are using market dominance to avoid scrutiny, and that without strict regulation, we risk entering an era of unchecked algorithmic harm.

A comprehensive critique cited by the AI Now Institute suggests governance must go beyond token audits and require companies to “affirmatively demonstrate they are not causing harm”—just as we expect in sectors like food and pharmaceuticals.

Key priorities include breaking up tech data monopolies, halting biometric and worker surveillance, reforming competition laws, and preventing trade deals from weakening national regulations.

Compliance is not just about avoiding fines—it’s about corporate responsibility.

Some experts call for laws requiring companies to prove they are not anti-competitive or harmful, shifting the burden of proof from consumers to developers.

Others warn of creating a “mountain of red tape” that could stifle innovation and national competitiveness. But governance doesn’t have to mean paralysis. It means clarity, accountability, and shared guardrails.

For HR leaders, the question around AI is no longer 'if,' but 'how.'

How quickly can we adapt governance models? How deeply can we embed ethics into our culture? And how decisively can we act before the damage becomes irreversible?

But amid all this AI and automation in HR, one point must not be forgotten: the human touch.

This is an increasingly significant concern, especially among HR and people leaders who are at the forefront of change management.

AI and automation in HR processes are certainly delivering strong results and helping refine strategies. However, 80% of the workforce in Gulf nations point to the loss of human connection as one of the biggest risks, alongside cybersecurity threats, data bias, and privacy concerns.

"Is anyone else feeling like they have HR Tech Fatigue?" - asked Claire Selwood, a Dubai-based HR transformation leader. The thought emerges from an overwhelming presence and influence of AI-based tech in HR.

She added, "obsessed with automation so much that we are coding out the very human that makes HR, human..platforms and coding algorithms promising the perfect candidates and smooth hires without any problems..don’t work as well as they promise...back of our minds, we know that we’re replacing human connection with code..are we building a more efficient HR at the expense of a humane one?"

She urged HR leaders to retain a human-first approach in the age of AI and the automation of people processes.

In our feature on humanifying AI, HR leaders echoed that as AI takes over daily operations and repetitive tasks, it creates an opportunity to rebuild genuine human connections.

"..embrace technology to humanize the workplace, and foster cultures where people thrive through constant learning, feedback, and growth." pointed out Hilal Al Jadidi, Omran Group's Chief People & Change Officer. 

"One of the clearest applications of AI in HR is automating administrative tasks, freeing people to focus on what only humans can do. For us, that’s a key design principle," underlined Harsha Jalihal, Chief People Officer at MongoDB, as she spoke about the importance of being cautiously optimistic about AI in HR.

One practical step: make it a compliance requirement to review the human side of AI-generated data reports and analytics, ensuring a balance between technology, people, and the organisation’s strategic growth ambitions.
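
What might such a requirement look like in practice? Below is a minimal sketch of a human sign-off gate for AI-generated reports. The function, field names, and reviewer handle are hypothetical illustrations, not a vendor API:

```python
from typing import Optional

def release_report(report: dict, reviewer: Optional[str]) -> dict:
    """Block an AI-generated HR report until a named human signs off."""
    if not reviewer:
        raise PermissionError(
            "AI-generated report requires human review before release")
    # Record who approved it, so accountability stays with a person
    report["reviewed_by"] = reviewer
    report["status"] = "released"
    return report

# Hypothetical draft produced by an automated analytics pipeline
draft = {"title": "Q3 attrition analysis",
         "generated_by": "analytics-pipeline",
         "status": "draft"}

try:
    release_report(draft, reviewer=None)  # no sign-off: blocked
except PermissionError as err:
    print(err)

print(release_report(draft, reviewer="maria.hr_lead"))
```

The point is not the code itself but the design choice it encodes: no AI-generated insight about people reaches decision-makers without a human name attached to it.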
