Human oversight required: The trust crisis threatening AI’s workplace takeover
- Josephine Tan
As AI agents gain ground in the workplace, Singapore’s workforce is sending a clear message: they are ready to collaborate with AI, but not ready to be led by it. According to Workday’s global research report, AI Agents Are Here – But Don’t Call Them Boss, 83% of employees in Singapore are comfortable teaming up with AI agents—yet only 8% are comfortable being managed by one.
This stark contrast underscores a critical challenge for HR leaders: how to embrace the productivity and efficiency benefits of AI while preserving the human judgment, empathy, and accountability employees trust.
AI as partner—not manager
“HR leaders today are navigating a new reality where they must manage both human employees and AI agents, while ensuring their workforce has the skills for long-term success,” Jess O’Reilly, General Manager, ASEAN, Workday, told HRM Asia. “For AI agents to scale in deployment, HR leaders must redesign work so that technology augments, not replaces, human judgment.”
Singapore’s workforce is broadly optimistic about AI, and adoption is already widespread: 79% of organisations are rolling out or operating AI agents. Employees, however, clearly distinguish between acceptable and unacceptable applications. They are far more comfortable with AI recommending new skills (50%) than with its use in sensitive areas such as hiring (15%), finance (23%), or legal matters (13%).
“As much as employees recognise the value of AI, they draw clear lines on where it should be used—especially in sensitive areas such as hiring, finance and compliance,” O’Reilly noted.
While nearly 90% of employees in Singapore believe AI agents will boost productivity, the report also reveals deeper anxieties: 50% fear a decline in critical thinking, and 33% anticipate a drop in the quality of human interaction.
O’Reilly warned that rushing AI deployment without clear human-centric guardrails carries significant organisational risk. “The greatest risk for any organisation deploying AI agents without clear boundaries is the erosion of critical thinking and an overreliance on AI,” she said.
She also highlighted a growing concern: “agent sprawl,” where organisations deploy too many agents with poorly defined roles. This can quickly lead to compliance, security, and cost challenges. O’Reilly emphasised the need for a centralised system of record to manage AI agents and maintain oversight: “Define roles, ensure compliance, and make sure humans remain accountable at every key decision point.”
Why trust—and governance—matter more than technology
Employees in Singapore overwhelmingly agree that trust in AI requires strong regulation. All respondents said they want some form of oversight, with 57% preferring ethical guidelines set by developers and 48% favouring strict human oversight.
O’Reilly noted that strong governance frameworks must go beyond policies and documentation—they must actively centre employee experience and trust. “Organisations should prioritise employee perspectives by developing AI-related policies in consultation with them,” she said. “Successfully navigating these sensitivities requires thoughtful, context-specific implementation strategies and strong change management.”
She emphasised the value of continuous feedback: “Employee feedback loops can help catch unintended outcomes early and reinforce trust, while regular independent audits ensure fairness and correct potential biases.”
A single source of truth is equally crucial. “Ensuring a centralised repository of employee data helps build consistency, accuracy, and transparency in how AI agents operate,” she added. This alignment enables HR, IT, and operations teams to work from a common data foundation.
O’Reilly pointed to Workday’s own practices as an example of intentional, people-centric AI deployment. “We’ve been deliberate in how we deploy AI. We must ensure that we are always keeping people at the centre and using technology to unlock human potential, not replace it,” she said. Workday’s Everyday AI initiative focuses on helping employees work smarter and focus on higher-value tasks.
She also highlighted a regional success story: Scoot. “As an early adopter, Scoot built a unified HR foundation on Workday to manage talent from acquisition to development,” she shared. “By leveraging a single source of truth for their data, they can make data-driven decisions, create custom employee experiences, and foster a culture of continuous innovation.”
But she noted that leadership behaviour matters just as much as technology. “Alongside implementing AI tools, intentional leadership is essential to build trust. Leaders should be transparent, empathetic, and clear about how AI is used, and they should listen continuously through feedback channels,” she said.
The findings from Workday’s research reflect a broader truth: AI will reshape Singapore’s workplaces, but employees expect organisations to implement it responsibly, transparently, and with a human touch.
“We’re entering a new era of work in Singapore where AI is an incredible partner to organisations today, complementing human judgment, leadership, and empathy,” she concluded. “To drive productivity and trust, it is important that we rely on AI as a partner rather than a leader.”