Your AI needs a conscience. HR holds the key.
- Josephine Tan

“Leaders should not underestimate the amount of human judgment, contextual understanding, and ethical foresight required to successfully deploy AI systems. You can’t outsource that thinking to a model.” – Mary Wells, Chief Marketing Officer, Cloudera
In boardrooms and team huddles across the globe, the conversation is dominated by two letters: AI. The relentless advance of AI is placing unprecedented demands on organisations, creating a dual pressure to innovate at lightning speed while navigating a minefield of ethical, security, and governance challenges. While many see this as a purely technological race, forward-thinking leaders argue that success is determined not by algorithms alone, but by the human principles guiding them.
At the forefront of this conversation is Mary Wells, Chief Marketing Officer at Cloudera, a data company that believes “data can make what is impossible today, possible tomorrow.” Wells contends that for AI to be truly transformative and sustainable, it must be built on a foundation of human-centric leadership and ethical design. Crucially, the challenge of architecting an ethical culture, fostering genuine inclusion, and developing leaders capable of navigating this new terrain falls within HR’s domain, transforming the function from a support role into a strategic necessity for the AI era.
In the frantic push to deploy AI, the temptation is to focus on short-term wins and technical capabilities. Wells cautioned against this reactive posture, advocating for what she calls “above-the-line thinking”, a leadership mindset that elevates the conversation from siloed problem-solving to strategic, enterprise-wide foresight.
“Above-the-line thinking means moving beyond reactive decisions and focusing on the bigger, more strategic questions,” Wells told HRM Asia. “In AI, this is about aligning innovation with long-term business goals, not just technical capabilities. It requires leaders to take clear ownership, encourage open dialogue, and promote accountability across teams, ensuring everyone feels responsible for ethical outcomes.”
This approach becomes critical as AI matures beyond isolated experiments. “As organisations move advanced AI systems from experimental pilots into widespread use, it becomes critical that these systems remain explainable, well-governed, and compliant with relevant standards,” she noted.
But how does an organisation translate these lofty ideals into practice? Wells argued that it starts with the data itself. A flawed data foundation will inevitably lead to flawed—and potentially biased or non-compliant—AI outcomes.
“Cloudera emphasises that ethical AI begins with a strong data foundation,” Wells said. “Many issues around bias and privacy stem from poor data governance or lack of transparency.” She pointed to the necessity of giving organisations full control over their data lifecycle, especially in a world of complex hybrid and multi-cloud environments. This is where technical architecture and corporate culture must intersect.
Cloudera’s framework for this is its Shared Data Experience (SDX), which centralises and enforces consistent security, governance, and lineage policies across an organisation’s entire data landscape. While this sounds technical, its implication for HR and culture is profound. It ensures that ethical rules are not left to individual interpretation but are built into the system, creating what Wells calls “responsible data use by design.”
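To make the “by design” idea concrete, consider the minimal sketch below. It is a hypothetical illustration in Python, not Cloudera’s actual SDX API: the registry, policy fields, and dataset names are all invented for this example. What it demonstrates is the principle Wells describes, namely that the access rule lives in one enforced, central place rather than in each engineer’s individual judgment.

```python
from dataclasses import dataclass, field

# Hypothetical illustration only -- not Cloudera's SDX API.
# The idea: access rules live in one central registry that every
# pipeline must consult, so policy is enforced by the system
# rather than left to individual interpretation.

@dataclass
class Policy:
    dataset: str
    allowed_purposes: set[str]                           # e.g. {"analytics"}
    pii_columns: set[str] = field(default_factory=set)   # columns to mask

class GovernanceRegistry:
    """Single source of truth for data-access policy."""

    def __init__(self) -> None:
        self._policies: dict[str, Policy] = {}

    def register(self, policy: Policy) -> None:
        self._policies[policy.dataset] = policy

    def authorise(self, dataset: str, purpose: str) -> Policy:
        policy = self._policies.get(dataset)
        if policy is None:
            # Deny by default: no policy means no access.
            raise PermissionError(f"No policy registered for {dataset!r}")
        if purpose not in policy.allowed_purposes:
            raise PermissionError(f"{dataset!r} may not be used for {purpose!r}")
        return policy

registry = GovernanceRegistry()
registry.register(Policy("hr_records", {"analytics"}, pii_columns={"name", "salary"}))

policy = registry.authorise("hr_records", "analytics")   # permitted
print("Mask before use:", policy.pii_columns)
# registry.authorise("hr_records", "model_training")     # would raise PermissionError
```

However simplified, the sketch captures why this matters for culture: an engineer cannot quietly repurpose sensitive data, because the system itself says no.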
Beyond data architecture, the risk of embedding societal bias into AI systems is one of the most significant challenges of our time. According to Wells, the solution runs far deeper than simply “cleaning” a dataset; it requires a fundamental rewiring of how organisations approach inclusion.
“When inclusion is treated as a strategic imperative, it’s reflected in the decision-making process, who has a seat at the table, and whether people feel empowered to contribute their ideas and challenge assumptions,” she explained. “In the technology space, true inclusion is evident when diverse voices are shaping not just product features, but product direction.”
This requires leaders to confront their blind spots. Wells recounted a recent discussion with Dr Maya Dillon in Ireland where they explored this very topic: “What are the biggest blind spots leaders have when it comes to AI strategy and governance – and what does AI-first leadership look like in practice?”
A truly inclusive organisation, Wells suggested, measures success differently. “They go beyond hiring statistics; they look at who’s leading projects, whose ideas are funded, and who’s getting promoted.” This is a direct challenge to HR leaders to scrutinise their own processes. Are succession plans identifying a truly diverse slate of future leaders? Do innovation initiatives actively solicit and fund ideas from underrepresented groups? And critically, are leaders modelling the right behaviour by “listening first, inviting challenge, and creating an environment where people feel respected”?
Empowering human judgment in an automated world
Perhaps the most critical message for any organisation is that AI is a tool to augment human capability, not replace human accountability. Wells is adamant that the most sophisticated model is no substitute for human oversight and ethical reasoning.
“Leaders should not underestimate the amount of human judgment, contextual understanding, and ethical foresight required to successfully deploy AI systems,” she warned. “You can’t outsource that thinking to a model.”
This philosophy demands a culture of psychological safety, where employees at all levels feel empowered to act as ethical guardians. “Clear accountability is essential; every team member, from developers to marketers, must understand their role and what they’re responsible for regarding AI outcomes,” said Wells. She went further, asserting that teams “must have the authority and the confidence to intervene, halt, or redirect an AI initiative whenever ethical or safety considerations arise.”
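What that authority to halt might look like in practice is sketched below. This is an illustrative Python example under assumed names, not a description of Cloudera’s tooling: the `AIInitiative` class and its workflow are invented here. It encodes the principle Wells asserts: any accountable team member can raise a block, and deployment refuses to proceed until the concern is resolved.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch of the "authority to halt" principle.
# Any team member can raise a block; deployment is refused
# while an unresolved block exists.

@dataclass
class EthicsBlock:
    raised_by: str
    reason: str
    raised_at: datetime
    resolved: bool = False

class AIInitiative:
    def __init__(self, name: str) -> None:
        self.name = name
        self.blocks: list[EthicsBlock] = []

    def raise_block(self, raised_by: str, reason: str) -> None:
        # Deliberately no seniority check: anyone accountable can halt.
        self.blocks.append(EthicsBlock(raised_by, reason, datetime.now(timezone.utc)))

    def deploy(self) -> None:
        open_blocks = [b for b in self.blocks if not b.resolved]
        if open_blocks:
            raise RuntimeError(
                f"Deployment of {self.name!r} halted: "
                + "; ".join(f"{b.raised_by}: {b.reason}" for b in open_blocks)
            )
        print(f"Deploying {self.name}...")

initiative = AIInitiative("candidate-screening-model")
initiative.raise_block("marketing-analyst", "training data under-represents older applicants")

try:
    initiative.deploy()
except RuntimeError as halt:
    print(halt)   # the concern is surfaced, not silently overridden
```

The design choice worth noting is that the gate is structural, not procedural: the pipeline itself enforces the pause, so intervening does not depend on an individual’s confidence to escalate.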
In a competitive talent market, employees are increasingly drawn to organisations that offer more than just a paycheck; they seek a sense of purpose. Wells believes that the ethical application of technology can be a powerful engine for employee engagement and a defining part of an organisation’s legacy.
“The legacy I want to see is one where innovation and empathy are not at odds, but in sync,” she said. “A legacy where human-centric leadership isn’t a soft skill, but a strategic advantage that drives ethical, inclusive, and sustainable progress.”
She brought this vision to life with powerful examples. Cloudera has collaborated with the NGO Mercy Corps on an AI-powered tool to help humanitarian responders act earlier on agricultural crises. The goal is simple and profound: “Leveraging AI to feed starving people.” The organisation has also worked with a pharmaceutical firm to accelerate drug development and personalise treatments, a project that both reduces R&D costs and improves patient outcomes.
These initiatives are complemented by internal programmes, such as the global Women Leaders in Technology programme, designed to foster inclusion and action within the industry. This is where purpose becomes strategy. For HR leaders, weaving these stories into the corporate narrative—from recruitment to onboarding to internal communications—is key to building a workforce that is not only skilled but also deeply motivated.
As Wells concluded, the decisions made now will have lasting repercussions. “The decisions we make today are going to shape both the market and society at large – it will impact the way we work and live. I hope future generations look back at this moment as the time when we chose to lead with integrity. When we didn’t chase what’s possible, but paused also to consider what is the right thing to do.”