AI Guide for HR Professionals

AI use in the workplace has proliferated recently, spanning recruitment, personalised onboarding, automated task allocation and performance management.

AI can help HR teams streamline the collection and analysis of data across their processes.

These opportunities are exciting, but the accompanying legal risks must be managed to ensure compliance.

AI Bias and Discrimination Risks

Just as humans can be biased, AI systems can reproduce bias when used to guide decision-making around potential hires, promotions or redundancies.

In recruitment, an AI screening system searches resumes and cover letters for the ‘right’ experience, words and phrases. In restructuring, AI can provide insights by analysing employee data such as performance metrics, attendance records and skill sets.
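
To make the screening step concrete, here is a minimal, hypothetical Python sketch of keyword-based resume scoring. Real vendor tools are far more sophisticated, but many reduce to scoring documents against the ‘right’ words and phrases in a similar way (the target phrases and threshold below are invented for illustration):

    # Hypothetical illustration only: real vendor screening tools are far
    # more sophisticated, but many still reduce to scoring documents
    # against 'right' words and phrases in a similar way.
    TARGET_PHRASES = ["stakeholder management", "workforce planning", "payroll"]

    def score_resume(text: str) -> int:
        """Count how many target phrases appear in the resume text."""
        text = text.lower()
        return sum(1 for phrase in TARGET_PHRASES if phrase in text)

    def shortlist(resumes: dict[str, str], threshold: int = 2) -> list[str]:
        """Return candidates whose resumes meet the keyword threshold."""
        return [name for name, text in resumes.items()
                if score_resume(text) >= threshold]

A filter like this silently rewards whatever vocabulary past successful candidates happened to use, which is exactly how historical bias can creep in.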

Although AI tools can reduce favouritism by applying consistent criteria, they lack human judgement, such as consideration of an employee’s personal circumstances or other extenuating factors.

Further, AI is only as effective as the data it is trained on; if that data is biased, the resulting system will be too, automating and entrenching discriminatory decisions. For example, in the USA, Amazon’s experimental AI recruitment tool was found to be biased against women because it had inferred from historical hiring data that male applicants were preferable.

The system had ‘learned’ to judge suitability by analysing the resumes of previous candidates, and because women remain underrepresented in STEM roles, it penalised resumes from female applicants, entrenching the existing bias.

A significant challenge in assessing potential bias is that the algorithms are often complex and opaque. However, organisations can take action to mitigate discrimination risks by asking:

  • What characteristics or data points will this AI tool be analysing?
  • Are any of these characteristics protected attributes under workplace laws, such as an employee’s sex, age, race or disability?
  • Have we run this tool against sample data to evaluate whether it might have a disparate impact on any protected group (see the sketch after this list)?
  • What do any licensing or purchasing agreements state about indemnification or contribution if the organisation is subject to legal claims relating to its use of this tool?
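
On the third question, one common benchmark is the ‘four-fifths rule’ used by US regulators: if any group’s selection rate falls below 80% of the highest group’s rate, the tool warrants closer scrutiny. Below is a minimal sketch, assuming you can group the tool’s pass/fail outcomes by a protected attribute (the sample numbers are invented):

    def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
        """outcomes maps group -> (number selected, total applicants)."""
        return {group: selected / total
                for group, (selected, total) in outcomes.items()}

    def four_fifths_flags(outcomes: dict[str, tuple[int, int]]) -> dict[str, bool]:
        """Flag any group whose selection rate is below 80% of the best rate."""
        rates = selection_rates(outcomes)
        best = max(rates.values())
        return {group: rate / best < 0.8 for group, rate in rates.items()}

    # Invented sample numbers from a trial run of a screening tool.
    sample = {"men": (48, 100), "women": (30, 100)}
    print(four_fifths_flags(sample))  # {'men': False, 'women': True}, as 0.30/0.48 = 0.625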

The reality is that if an AI tool is flawed or poorly programmed, disgruntled applicants or employees will seek to hold the organisation, not the AI vendor, liable for breaches of discrimination laws.

Misinformation, Accuracy and (Un)reliability of AI

Generative AI is built on a large language model (LLM), which predicts which words are likely to follow one another; it does not verify whether the underlying information is accurate. As a result, AI can give wrong answers convincingly (for example, the notorious instance of a US lawyer citing fake AI-generated cases in court).
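
A toy sketch makes the point. The ‘model’ below learns only which word tends to follow which in its training text, so it can emit fluent-looking output with no notion of accuracy; a real LLM is vastly more sophisticated, but the same basic limitation applies:

    import random
    from collections import defaultdict

    # Toy sketch: this 'model' learns only which word tends to follow
    # which in its training text; it has no concept of truth.
    corpus = "the auditor reviewed the bank and the auditor approved the report".split()

    following = defaultdict(list)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word].append(next_word)

    def generate(start: str, length: int = 8) -> str:
        """Chain likely-next words into fluent-looking but unverified text."""
        words = [start]
        for _ in range(length):
            options = following.get(words[-1])
            if not options:
                break
            words.append(random.choice(options))
        return " ".join(words)

    print(generate("the"))  # plausible word order, zero fact-checking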

Generative AI therefore requires human oversight, but that oversight may fall away as workers become complacent and overly trusting of the technology.

For example, it was reported in November 2023 that a group of academics made an embarrassing submission to a parliamentary inquiry that unintentionally drew on AI-generated ‘information’ containing false allegations of serious wrongdoing by the big four consultancy firms.

Relying on the Google Bard AI tool, the academics accused KPMG of complicity in a “KPMG 7-Eleven wage theft scandal” and of auditing the Commonwealth Bank during a financial planning scandal, when KPMG had never audited that bank.

AI is still relatively new, and people are typically cautious about using it in their work. As familiarity grows, however, the risk of relying on false AI-generated information may well increase.

Another misinformation issue is that candidates may use generative AI to embellish their resumes, which can produce superlative-laden, inaccurate or misleading claims. Recruiters will need to do more research to verify candidates.

Important actions include:

  • Establishing clear written guidelines about AI use in the workplace, including that AI-generated content must not be published without human review;
  • Setting expectations for employees not to be complacent when using AI at work; and
  • Using employment agreements that address what happens if the organisation learns, after hiring, that a candidate included false information in their application.

Privacy Risks

AI also raises privacy and cybersecurity risks, including in relation to the collection, storage and dissemination of the data that AI systems ingest and produce.

AI regulation in Australia is currently limited. In June 2023, the federal government released a discussion paper, Safe and Responsible AI in Australia, and the EU AI Act (first proposed in 2021) has been referenced as a guide to a possible regulatory framework.

When using AI in the workplace, there is a risk of unauthorised incorporation of user data and leakage of confidential information: data that users enter as queries may become part of the model’s future training dataset and can then surface in outputs to other users’ queries. This is particularly serious where users input sensitive data into the AI system.

In a now-famous example from early 2023, three Samsung employees unwittingly leaked highly sensitive company information while using ChatGPT to assist with tasks. That data effectively became available as training data to OpenAI, the company behind the service.

Many vendors, including OpenAI, now restrict how user inputs are incorporated into future training, but there is still no guarantee that sensitive data will remain secure and outside future training sets.

Important actions include:

  • Obtaining consent from applicants before using any AI in video interviews (or in any other context, such as meetings) and deleting such data on request; and
  • Establishing clear written guidelines that protect employees from unintentionally leaking confidential information when using AI in the workplace (see the sketch after this list).
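
Such guidelines can be backed by simple technical guardrails. Below is a minimal, hypothetical sketch of a redaction step that strips obvious identifiers before a prompt leaves the organisation; the patterns and ID format are illustrative assumptions, and a production guardrail would need a vetted PII-detection library:

    import re

    # Illustrative patterns only: a production guardrail needs a vetted
    # PII/secrets detection library and rules tuned to the organisation.
    REDACTION_PATTERNS = {
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "PHONE": re.compile(r"\b\d[\d\s-]{7,}\d\b"),   # very rough
        "EMPLOYEE_ID": re.compile(r"\bEMP-\d{4,}\b"),  # hypothetical ID format
    }

    def redact(prompt: str) -> str:
        """Replace obvious identifiers with placeholders before the
        prompt is sent to any external AI service."""
        for label, pattern in REDACTION_PATTERNS.items():
            prompt = pattern.sub(f"[{label} REDACTED]", prompt)
        return prompt

    # Usage: wrap every outbound call, e.g. send_to_ai(redact(user_prompt)),
    # where send_to_ai is your own API wrapper (hypothetical name).
    print(redact("Email jane.doe@example.com (EMP-12345) on 0412 345 678"))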

Feel free to reach out to WilliamsonBarwick if your organisation is grappling with these issues; we can assist.
