AI Use in Talent Acquisition: Pitfalls and Liabilities
By: Phillip Oliver
Every day, headlines tout how artificial intelligence and generative AI solutions are going to completely transform work, including Talent Acquisition. Unfortunately, they all fail to explain the risks.
While it’s true AI is going to shake up Talent Acquisition processes, some of the ideas being thrown around are dangerous to suggest without explaining the risks.
TA leaders should not implement AI technology without first understanding how AI works, whether it works, what its limitations are, and how to implement it purposefully and responsibly.
How does AI work?
While there are some open-source AI projects available, many products, such as ChatGPT (OpenAI), Bard (Google), Bing Chat (Microsoft), and MidJourney, operate as black boxes. We won’t get to know the inner workings of these tools or what risks they may be opening companies up to.
While we don’t know exactly how those products work, one can still understand how AI works at a macro level (1): a model is trained on large sets of example data, learns statistical patterns in that data, and then generates the output it estimates is most likely for a given input.
A simple example:
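To make that concrete, here is a minimal "hiring model" sketched in Python with scikit-learn. Everything below, the features, the data, and the scenario, is hypothetical and exists only to show the train-then-predict loop; it is not how any particular vendor's tool works.

```python
# A minimal sketch of the train -> predict loop behind most AI tools.
# All features and data below are hypothetical.
from sklearn.linear_model import LogisticRegression

# Historical examples the system learns from:
# [years_experience, num_certifications] -> 1 = hired, 0 = not hired
X_train = [[1, 0], [2, 1], [5, 2], [7, 3], [3, 0], [8, 2]]
y_train = [0, 0, 1, 1, 0, 1]

model = LogisticRegression()
model.fit(X_train, y_train)  # the model learns statistical patterns in the data

# The model now approximates an answer for a candidate it has never seen.
new_candidate = [[4, 1]]
print(model.predict(new_candidate))        # a yes/no guess, e.g. [1]
print(model.predict_proba(new_candidate))  # probabilities, not facts
```

The key point: the model never "knows" anything. It extrapolates from whatever examples it was given.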
Does AI work?
Yes…and no.
For many tasks and prompts, AI can generate useful responses, outputs, and even provide helpful suggestions.
But fundamentally, an AI system is approximating its answer based on the datasets it was trained on. This uncertainty is compounded by the fact that humans write the code and validate the outputs, introducing their own biases into the system’s problem-solving processes (2).
The solutions and answers an AI system produces should never be treated as inherently factual.
AI's limitations and pitfalls
These AI tools should not be used without a proper risk assessment and human oversight. If you’re not mindful of AI’s limitations and pitfalls, you’re opening your company up to liability, expensive data privacy violations (3), and lawsuits (4).
Here are three major AI limitations, each with example pitfalls in a TA process:
Be aware of Garbage In, Garbage Out (GIGO). If the AI system you want to use was trained on a bad dataset, or on a dataset that doesn’t represent your company, can you rely on its outputs? (A short sketch below illustrates the problem.)
In their current iterations, many of these AI tools are unreliable. OpenAI even admits this directly: “Despite its capabilities, GPT-4 has similar limitations as earlier GPT models. Most importantly, it still is not fully reliable” (5).
AI tools have been shown to provide false answers and to fabricate information and facts. These fabrications are called hallucinations (5, 6).
Potential Pitfall example: an AI chatbot confidently gives a candidate fabricated details about your benefits or interview process, and the candidate makes decisions based on them.
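As a rough illustration of GIGO, here is the same kind of toy model from earlier, this time trained on a skewed, hypothetical dataset. The data is invented purely to show how bad training data drives bad outputs:

```python
# GIGO sketch: a toy screener trained on skewed, hypothetical data.
from sklearn.linear_model import LogisticRegression

# Suppose the historical data came from a team that only ever hired
# candidates with many certifications, regardless of experience.
# [years_experience, num_certifications] -> 1 = hired, 0 = not hired
X_skewed = [[1, 3], [2, 4], [8, 0], [9, 1], [3, 5], [7, 0]]
y_skewed = [1, 1, 0, 0, 1, 0]

model = LogisticRegression().fit(X_skewed, y_skewed)

# A highly experienced candidate with one certification gets screened out,
# not because of merit, but because of the data the model saw.
print(model.predict([[10, 1]]))  # likely [0]
```

The model is working exactly as designed; the design was fed garbage.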
Because humans create the AI system and perform its validation testing, the programmers and validators may introduce their own biases into the system (1).
Potential Pitfall example: a resume-screening model trained on your historical hiring decisions learns to downrank candidates from groups your organization previously overlooked, quietly reproducing past bias at scale.
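One practical guardrail is to audit the outcomes your tool produces. The sketch below uses hypothetical groups and counts and applies the "four-fifths rule," a rough screen US regulators use for adverse impact (a selection rate below 80% of the highest group's rate warrants review):

```python
# Sketch of an adverse-impact check on an AI screener's decisions.
# Group names and counts are hypothetical.

decisions = {
    # group: (candidates screened in, total candidates)
    "group_a": (45, 100),
    "group_b": (28, 100),
}

rates = {group: passed / total for group, (passed, total) in decisions.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest  # impact ratio vs. the highest-rate group
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} [{flag}]")
```

A check like this doesn't prove or disprove bias on its own, but it tells you where to look before a regulator or plaintiff does.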
Why is the system providing the answers it does? If the AI tool you’re using is a black box, then you can never truly explain why it produced a given answer, solution, or decision.
The responsibility is on the employer to understand how their chosen AI implementations behave. The US Department of Justice is already advising employers on this issue.
Potential Pitfall example: a rejected candidate, or a regulator, asks why the tool screened them out, and you have no defensible explanation because the vendor cannot or will not reveal how the decision was made.
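By contrast, when you have access to a transparent model, you can at least inspect what drives its decisions. Here is a minimal sketch, again with hypothetical features and data, of what that inspection might look like; with a black-box vendor tool you cannot do even this much:

```python
# Sketch: inspecting a transparent model (hypothetical features and data).
from sklearn.linear_model import LogisticRegression

feature_names = ["years_experience", "num_certifications"]
X = [[1, 0], [2, 1], [5, 2], [7, 3], [3, 0], [8, 2]]
y = [0, 0, 1, 1, 0, 1]

model = LogisticRegression().fit(X, y)

# Report which inputs push a decision up or down: a starting point for a
# defensible answer to "why was this candidate screened out?"
for name, weight in zip(feature_names, model.coef_[0]):
    print(f"{name}: weight {weight:+.2f}")
```

If your vendor cannot give you at least this level of visibility, assume you will not be able to explain the tool's decisions either.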
Decision framework to implement AI responsibly
I would expect any innovative TA team to be exploring how to utilize AI technology in its processes. Use AI tools to speed up productivity, boost creativity, and ease administrative burden, but ensure you have validation oversight in place. Be especially careful if you want to use AI tools for any decision-making actions.
You'll want to ensure your implementations are purposeful and that you understand how your chosen AI tool works.
Here is a framework of questions to use and build upon before choosing to implement an AI-driven solution in your TA process:
- What data was the system trained on, and does it represent our company and our candidate pool?
- How was the tool validated, by whom, and against what benchmarks?
- Can we explain why the tool produces a given answer, recommendation, or decision?
- What human oversight reviews its outputs before they affect candidates?
- What data privacy, bias, and legal risks does it introduce, and who owns them?
- Will the tool assist humans, or will it make decisions on its own?
Sources: