AI for human resources

Are businesses liable for AI discrimination?
Cameron Shilling

Automation pervades all business operations, including human resources. Artificial intelligence and automated decision systems (ADS) can be valuable tools for HR professionals, and should be an integral part of HR operations today and in the future. It should come as no surprise, therefore, that businesses will be held responsible when discriminatory personnel decisions are made either by ADS or by a human relying on AI to help make those decisions.

AI and ADS offer immense benefits for HR professionals. Here are just a few examples.

Review and summarization of job applicant materials for completeness and suitability for available jobs, employer culture and expectations, etc.

Summarization and analysis of interview content and performance.

Assessments of personality, characteristics, skills, aptitudes, etc.

Summarization and analysis of performance based on the entirety of an employee’s work product, including emails, phone and videoconference calls, recorded meetings, interactions with business systems and databases, speed and accuracy, etc.

Recommendations for job placement, advancement, training opportunities, etc.

Monitoring and alerting about employee conduct for disciplinary purposes.

The initial reaction of some HR professionals is that AI would not be useful to them, or that they do not trust AI to generate reliable results. The proof of AI's utility will be in the pudding, as more and more HR operations adopt it and as HR professionals who refuse to do so fall behind in the industry. Moreover, studies show that results generated by AI are not necessarily less reliable than human decision-making, since business managers and HR professionals make decisions that are influenced, consciously and unconsciously, by latent variables and inherent biases.

Finally, existing HR systems and applications already incorporate AI and ADS, and that functionality is only becoming more powerful. Businesses that choose not to use it will put themselves at a competitive disadvantage over time.

Just like the humans who make personnel decisions, AI can generate decisions that are based on improper variables, or that appear in hindsight to be based on such factors. That occurs because AI models are trained on historical data, and that data embodies the biases and trends of the past. Likewise, AI trained on HR data about a business's historical and existing workforce, management structure, job descriptions, etc. will invariably incorporate the biases and trends inherent in that data. AI also may unintentionally disadvantage certain individuals, such as when an ADS disqualifies a candidate because of availability and the candidate's availability is limited by medical or family-care obligations, or when AI rates an employee's performance in meetings and calls poorly and that performance was due in part to a disability.

Since AI, like humans, can generate discriminatory results, it is unsurprising that businesses that use AI will be liable for those results. New York City, Illinois and California (effective October 1, 2025) have adopted new laws, or amended existing ones, to that effect. But do we really need a new law to tell us that? Existing laws already prohibit and punish discriminatory personnel decisions. AI and ADS do not make those decisions, no matter how autonomously they operate. Rather, employers use AI and ADS to analyze data and then implement the results those technologies yield. If a human relies on technology to help make a personnel decision, the human is still making the decision. Similarly, if an employer permits an ADS result to take effect without human oversight, the employer has made the decision to do so.

Perhaps the most salient aspect of these new AI regulations is that they require businesses to maintain records that can later be scrutinized to determine whether personnel decisions were legitimate or discriminatory. That requirement is consistent with other existing and emerging AI laws, which impose broader and more rigorous requirements on the use of AI for HR functions.

The European Union and Colorado adopted broad-based AI laws last year, and other states will almost certainly adopt similar laws this year and in years to come. Those regulations categorize the use of AI for employment decisions as high-risk. That does not mean that such use of AI is prohibited. Rather, to use AI for that purpose, employers must first conduct a risk assessment to identify the risks inherent in the use of AI and implement measures to mitigate those risks.

The fallibility of AI for HR functions is not a reason to refrain from using it. After all, humans are fallible, too, and technological advancement is indispensable for business development and competition. Rather, just like any other new technology, AI for HR must be implemented based on an appropriate risk assessment and mitigation process.

Cam Shilling founded and chairs McLane Middleton’s Cybersecurity and Privacy Group.