The human-centred AI revolution in HR: Beyond efficiency to empowerment
- Josephine Tan

The drumbeat for AI in HR is getting louder. Eager to optimise processes and boost productivity, HR leaders are increasingly turning to AI-powered solutions. But in the rush to adopt this transformative technology, a critical question emerges: Are we defining success too narrowly? As we embed AI into the core of human capital management (HCM)—a domain fundamentally about people—are we confronting complex ethical questions with enough courage, or are we still mesmerised by what AI can do, rather than what it should do?
For James Saxton, Vice-President, Global Product Ambassador, Dayforce, the answer lies in shifting the entire conversation. The goal is not simply to build more efficient systems, but to create a more empowered, valued, and augmented workforce. It is a vision that challenges the industry to move beyond a purely technical implementation of AI and towards a deeply human-centred one.
The true measure of AI’s success in HCM is not found in spreadsheets tracking process speeds. It is found in the daily experiences of the people it is designed to serve.
“AI is certainly transforming the way businesses operate, but it can’t replace what makes organisations thrive: their people,” Saxton told HRM Asia. “Organisations planning to implement AI shouldn’t just measure its success with efficiency, but also on the value it adds to the employees’ lives – how AI is augmenting and making their work life better.”
This perspective requires a shift away from a simplistic delegation mindset, where tasks are merely offloaded to machines. Instead, it champions an augmented model, where AI acts as a powerful partner to human talent. This is a critical distinction. A lot of technology, Saxton noted, still requires the human touch.
“AI is only as good as the people who use it,” he said, “and workers with strong employee experiences are more likely to embrace technologies and adapt quickly to change, resulting in a better return on investment.”
Achieving this return involves more than just mandating usage. It requires fostering a culture of co-creation where employees are inspired to “actively research, suggest new use cases, run pilots, and ensure they are well-informed about AI ethics and compliance.” This deeper engagement transforms the workforce from passive recipients of technology into active participants in its responsible evolution.
Many organisations start their AI journey with immense enthusiasm. They see the potential for AI agents to summarise information, recommend actions, or streamline complex workflows. The use cases are identified, and stakeholders are on board. Then, progress grinds to a halt.
“They may even have use cases ready and have got buy-in from their stakeholders, and all they need to do is source solutions and implement them. But this is where they usually get stuck,” Saxton observed.
The roadblock is not a lack of vision, but a lack of a clear, responsible path forward. Leaders are confronted with a tangle of risks surrounding compliance, data privacy, security, and ethics. Faced with these challenges, organisations either lose momentum or press ahead recklessly, hoping to fix problems as they arise—a strategy that can lead to significant legal and reputational damage.
To help organisations navigate this critical stage with clarity and confidence, Dayforce developed a practical framework to provide focused, actionable guidance. “As organisations consider new investments in AI to help improve their HCM’s efforts, we want to help them ask the right questions in their process and to their providers so they can figure out how to do it responsibly, reduce risks, and take the next steps with confidence,” Saxton stated. This framework is built on five core “non-negotiables.”
The framework for responsible AI: The five non-negotiables
These five principles provide a roadmap for any HR leader seeking to adopt AI in an ethical and effective manner.
1. Training
This is the fundamental step. It is not just about teaching people which buttons to press; it is about preparing them for a new way of working. This involves building technological literacy and ensuring employees see AI as a complement to their skills, not a replacement. Training must include robust education on AI ethics and be tailored to different roles and learning styles.
2. Compliance
The regulatory landscape for AI is a moving target. What is permissible today may not be tomorrow. An effective AI solution must be agile enough to navigate this shifting terrain. Organisations should benchmark their standards against a “high-water mark” for regulation, such as the EU General Data Protection Regulation (GDPR), but also ensure their chosen platform can adapt as new regional frameworks emerge.
3. Flexibility
AI is not a blunt instrument, and a one-size-fits-all approach is doomed to fail. Organisations need granular control over how AI is deployed: leaders must be able to toggle capabilities on or off for specific geographic regions, user roles, or even individual employees who wish to opt out. This flexibility ensures the technology aligns with both regulatory requirements and organisational governance.
4. Transparency
The “black box” nature of AI is a major source of concern. To ensure fairness and mitigate bias, organisations must demand transparency. “You need to know how the AI solution arrives at its conclusion,” the Dayforce report advised. This explainability is crucial for building trust, and for monitoring model quality over time to prevent drift, where a model’s performance degrades as it encounters new real-world data.
5. Protection
The data within an HCM system is among the most sensitive in any organisation. Therefore, any AI solution must uphold the highest standards of privacy and security. It should be built on a Privacy by Design methodology, which embeds these protections into the technology from the ground up. Vendors must be able to provide clear, documented proof of their data handling, training protocols, and audit processes.
Closing the skills gap and building confidence
A framework is only as good as the people who use it, and a significant AI skills gap exists. Saxton pointed to alarming statistics: a Skillsoft survey found 43% of employees see AI/ML as their biggest skills gap, while a Pluralsight survey revealed only 40% of executives offer formal IT training.
“Innovation is a cornerstone of culture, but it can’t be fully realised without equitable access to and awareness of technologies,” Saxton shared.
Closing this gap requires a two-pronged approach. First, organisations must implement the tailored, role-based training programmes mentioned earlier, ideally using a learning management system (LMS) to track progress and ensure completion. Second, they must create an AI “sandbox.”
“Once training is set, building an AI ‘sandbox’ will help users test what they’re learning,” Saxton recommended. This controlled environment provides employees with a safe space to experiment, apply their training, and develop practical skills and creative confidence before the technology is deployed organisation-wide.
Looking ahead, Saxton hopes Dayforce will foster a global conversation centred on trustworthy innovation. At its core is optimism: not blind faith in technology, but a confident optimism born of preparation.
For HR leaders navigating this new frontier, his advice is clear and resonant.
“As HR leaders explore more of what AI can do for them and their organisation, they must act as stewards of thoughtful adoption to ensure that these tools are used ethically,” he concluded. “By asking the right questions and being committed to trust and employee focus, HR can transform how people work with the support of AI technology to help assist, automate, and augment your people’s performance and effectiveness at work.”