The HR challenge of adopting Responsible AI

To respond to changing AI-related legislation, organisations should move to adopt Responsible AI practices, writes Philippa "Pip" Penfold.
By Philippa “Pip” Penfold | November 20, 2023

The challenges for human resources (HR) in 2024 and beyond may look much like those of recent years, but the way HR needs to respond is changing. They include a more inclusive approach to Diversity and Inclusion, a heightened focus on Sustainability driven by the climate crisis, and the need to prepare leaders to manage an increasingly complex business environment. Technology will also continue to feature prominently in HR’s foreseeable future, and the expectations on HR to leverage the benefits of Artificial Intelligence (AI) are unlikely to ease.

The more recent narrative surrounding the ethical aspects of AI has given rise to the concept of Responsible AI and, with it, the need for HR to adopt Responsible AI practices.

Responsible AI is the ethical and legal design, development, and deployment of AI systems that benefit individuals, communities, society, and the environment, and that have safeguards in place to ensure they remain inclusive, safe, secure, and reliable.

Although the precise definition of Responsible AI varies, the wider concept remains consistent: AI created, used, and managed for the benefit of humanity and the world in which we live.

HR would readily agree that all AI used in their business should be applied ethically. Though often expected to represent the high watermark of ethical standards in their business, HR’s use of AI to date has not been consistent with that position. A recent review of the scientific literature*, aimed at understanding how widely ethical consequences are considered when HR adopts AI, found that of 107 cases of HR using AI, only 11 examined employees’ or job seekers’ perceptions of the justice or trustworthiness of decisions and outputs. It would appear that most HR teams are not factoring the ethical considerations of Responsible AI into their AI decisions.

In addition to the ethical dimension, HR can embark on the Responsible AI path through the legal aspect. In recent years, data has been the focus of new legislation guiding the appropriate use of personal data in AI models and tools. The General Data Protection Regulation (GDPR), with its extraterritorial reach, impacted many businesses in Asia. While most countries now have their own data-related legislation, very few have adopted legislation specifically addressing AI. This is fast changing, however: the EU AI Act, under discussion in 2023, promises to impact companies as widely as the GDPR did. A flurry of AI-related legislation is likely to arise across the world within the next five years.

“While most countries now have their own data-related legislation, very few have adopted legislation specifically addressing AI. This is fast changing, however, with the EU AI Act under discussion.” – Philippa “Pip” Penfold, Managing Director, Integrating Intelligence.

Staying abreast of upcoming AI-related legislation will present HR with significantly more challenges than data privacy and protection legislation did. While those laws triggered the need to update company practices, achieving compliance was relatively straightforward, as there was no need to apply changes retrospectively. It is unlikely to be that clear-cut with AI. Changing an embedded AI tool because it does not comply with new legislative requirements has the potential to significantly disrupt a business.

For example, if a company uses an AI-powered recruitment and selection tool that, by its design, cannot comply with new legislation requiring it to be fully explainable and interpretable, and to prove that it does not disadvantage a minority group, the HR department may be compelled to cease using the tool outright. This scenario is not as unrealistic as it sounds.

The pressure for AI models to be explainable and transparent has been mounting for years, and most sets of AI principles include these requirements. AI Singapore (AISG) has been promoting the importance of fairness, accountability, and transparency in AI systems since its founding in May 2017. Notwithstanding the complexities presented by the boundaryless nature of AI, there are undoubtedly numerous AI tools in use today that have been developed and deployed with little to no regard for ethics, principles, or other standards.


This could pose a legacy issue for HR, who may need to retire one or more AI tools currently in use once new legislation is adopted. Responsible AI will undoubtedly present HR with many opportunities and challenges in the foreseeable future, and HR needs to be ready to manage those changes. Wherever your organisation lies on its AI journey, it is never too early to explore and adopt Responsible AI practices.

* Bujold, A., Roberge-Maltais, I., Parent-Rocheleau, X., Boasen, J., Sénécal, S., & Léger, P. M. (2023). Responsible artificial intelligence in human resources management: a review of the empirical literature. AI and Ethics, 1-16.


About the Author: Philippa “Pip” Penfold is Managing Director, Integrating Intelligence.

Join her at CHRO Singapore 2023 on December 7, where she will moderate a panel discussion titled “HR 2024 and Beyond: Building the Future Organisation Today”, which will analyse the key trends shaping the way organisations work and offer insights into how HR and business leaders can create winning strategies to build future-ready organisations.