Is AI the new gatekeeper? Bias concerns rise
- Josephine Tan
As generative AI continues to revolutionise HR processes, increasingly prominent concerns are emerging about its potential to perpetuate bias in hiring practices. A recent poll conducted by HRM Asia revealed that fairness in hiring is the top concern among HR professionals, with 33% citing it as their primary worry.
Elvin Goh, People Developer at Nanyang Polytechnic, echoed these sentiments, emphasising the potential for AI systems to exacerbate existing inequalities. “The primary worry is that AI systems, if not properly designed and monitored, can perpetuate or even exacerbate existing biases,” he explained. “These biases can stem from the data used to train AI models, which may reflect historical inequalities and prejudices.”
One key concern is that AI can reproduce biases present in the data used to train it. For instance, if an AI system is trained on data that predominantly features successful candidates from a particular demographic, it may unfairly favour similar profiles in future hiring decisions, entrenching existing inequalities and discrimination.
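To make that mechanism concrete, the sketch below shows how a simple screening model trained on historically skewed hiring decisions can end up favouring the historically preferred group. It is a minimal illustration only: the synthetic data, feature names, and use of scikit-learn are assumptions for demonstration, not details from the article or from any real hiring system.

```python
# Illustrative sketch: a screening model trained on skewed historical hiring
# data learns to favour the historically preferred group. The synthetic data
# and feature names are assumptions for demonstration purposes only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Two groups of applicants (0 and 1) with identical skill distributions.
group = rng.integers(0, 2, size=n)
skill = rng.normal(0, 1, size=n)

# Historical "hired" labels: skill matters, but group 0 was favoured.
hired = (skill + 1.0 * (group == 0) + rng.normal(0, 0.5, size=n)) > 0.8

# A feature that acts as a proxy for group membership (e.g. a school or
# keyword that correlates with demographics), alongside the skill signal.
proxy = group + rng.normal(0, 0.3, size=n)
X = np.column_stack([skill, proxy])

model = LogisticRegression().fit(X, hired)

# Score a fresh pool of equally skilled applicants from both groups.
new_group = rng.integers(0, 2, size=2000)
new_skill = rng.normal(0, 1, size=2000)
new_proxy = new_group + rng.normal(0, 0.3, size=2000)
shortlisted = model.predict(np.column_stack([new_skill, new_proxy]))

for g in (0, 1):
    rate = shortlisted[new_group == g].mean()
    print(f"Shortlist rate for group {g}: {rate:.2f}")
# Despite equal skill, group 0 is shortlisted at a noticeably higher rate,
# because the model learned the historical preference through the proxy feature.
```

The point of the sketch is that the model never sees a "demographic" column directly; the skew is carried by a correlated proxy, which is exactly why such bias is hard to spot without deliberate checks.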
Furthermore, the lack of transparency in AI decision-making processes can make identifying and correcting these biases challenging. Goh noted that without clear insights into how AI systems evaluate candidates, ensuring that all applicants are judged fairly and equitably becomes difficult. “This opacity can lead to mistrust among both jobseekers and HR professionals,” he added.
Another potential issue is AI’s limited ability to account for factors that are not easily quantifiable, such as recent events or personal circumstances that could significantly affect an applicant’s performance. Goh highlighted the risk of unfair decisions if AI-driven filtering does not take these factors into account.
As AI continues to play a more significant role in HR, organisations must address these concerns and take proactive steps to ensure that AI systems are designed and used ethically. This includes carefully curating the data used to train AI models, promoting transparency in decision-making processes, and regularly auditing and evaluating AI systems for bias.
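One common way to operationalise the "regularly auditing" step is a disparate-impact check, such as the widely used four-fifths rule, which compares selection rates across demographic groups. The sketch below is an assumption about what such an audit might look like in practice, not a method described in the article; the group labels, sample figures, and 0.8 threshold are illustrative.

```python
# Illustrative sketch of a simple bias audit: compare selection rates across
# demographic groups and flag adverse impact using the "four-fifths" rule.
# The data, group labels, and 0.8 threshold are assumptions for illustration.
from collections import defaultdict

def adverse_impact_audit(decisions, threshold=0.8):
    """decisions: iterable of (group, selected) pairs, selected is True/False."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in decisions:
        counts[group][0] += int(selected)
        counts[group][1] += 1

    rates = {g: sel / total for g, (sel, total) in counts.items()}
    best = max(rates.values())
    # Flag any group whose selection rate falls below 80% of the best group's rate.
    flags = {g: rate / best < threshold for g, rate in rates.items()}
    return rates, flags

# Hypothetical screening outcomes from an AI-assisted shortlist.
decisions = [("A", True)] * 60 + [("A", False)] * 40 \
          + [("B", True)] * 35 + [("B", False)] * 65

rates, flags = adverse_impact_audit(decisions)
print(rates)   # {'A': 0.6, 'B': 0.35}
print(flags)   # {'A': False, 'B': True} -> group B is below 80% of group A's rate
```

A check like this does not explain why a gap exists, but it gives HR teams a concrete, repeatable signal for when an AI-assisted process warrants closer human review.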