Conversations about artificial intelligence (AI) have been everywhere recently. Congress has held hearings about it, and news outlets have covered it extensively, including pieces in which writers tested AI's abilities by having it draft content for them. But how does AI affect companies and human resources?
According to the Pew Research Center, 62% of Americans believe AI will have a major impact on workers, but only 28% believe it will affect them directly. In reality, it is already affecting employees: about 4,000 jobs were lost in May 2023 due to AI, the first time AI was cited as a reason for layoffs.
While most of the recent conversation involves AI-generated content, other AI formats have been used in the workplace for some time.
AI in the Hiring Process
AI tools used in the hiring process have been praised for saving hiring managers valuable time and for creating a more diverse applicant pool by removing bias from the initial review. However, concerns have been raised that unintentional bias is built into these tools.
Resume review tools can use predictive analysis to determine what candidate profile would be the best fit for an open position and then compare submitted electronic resumes to find the "best available" candidates. However, if a candidate uses words or phrases that do not match the AI tool's expectations, that candidate may receive a lower evaluation even though their qualifications are equivalent.
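To make this failure mode concrete, here is a deliberately simplified sketch of keyword-based resume scoring. The keyword list, resumes, and scoring scheme are invented for illustration; real screening tools are far more sophisticated, but the underlying risk of penalizing unexpected wording is the same.

```python
# Hypothetical illustration of a naive keyword-matching resume screener.
# The expected-keyword list below is an invented example, not taken
# from any real product.

EXPECTED_KEYWORDS = {"managed", "budget", "forecasting", "stakeholders"}

def score_resume(text: str) -> float:
    """Return the fraction of expected keywords found in the resume text."""
    words = set(text.lower().split())
    return len(EXPECTED_KEYWORDS & words) / len(EXPECTED_KEYWORDS)

# Two candidates describing essentially the same experience:
resume_a = "Managed a $2M budget and forecasting process for stakeholders"
resume_b = "Oversaw a $2M spending plan and projections for partners"

print(score_resume(resume_a))  # → 1.0 (wording matches the keyword list)
print(score_resume(resume_b))  # → 0.0 (same experience, different words)
```

The second candidate scores zero purely because of vocabulary, which is exactly the kind of arbitrary penalty the article describes.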
More concerning are tools that analyze an applicant's personality, knowledge and communication skills using recorded responses to interview questions and facial expressions. These tools assess a candidate's fit for a job by matching them to a profile of the company's "ideal employee" based on appearance, communication skills, speech patterns, body language and personality. However, some of these tools have been found to be biased, screening out candidates based on gender, race, ethnicity or disability by assigning lower scores for factors (such as facial structure, accents, hair style, or wearing glasses or head scarves) that do not match the "ideal" parameters in the programming.
Regulations on the use of these tools are already in place. On April 25, 2023, four federal agencies — the U.S. Equal Employment Opportunity Commission (EEOC), U.S. Department of Justice (DOJ), Consumer Financial Protection Bureau (CFPB) and Federal Trade Commission (FTC) — issued a joint statement addressing the concerns about AI and its potential impacts. The statement covered several topics including the definition of AI, its potential positive uses and negative impacts, and potential areas for discrimination. Additionally, the statement affirmed each agency’s commitment “to monitor the development and usage of automated systems and promote responsible innovation,” as well as “pledge to vigorously use collective authorities to protect individuals’ rights regardless of whether legal violations occur through traditional means or advanced technologies.”
Illinois, Maryland and New York City have already passed laws regulating the use of "automated employment decision tools" in the hiring process, and many other states and cities are considering similar laws.
AI-Generated Content
Most AI news at this time centers on chatbots and the AI-generated content they produce. Chatbots such as OpenAI's ChatGPT, Microsoft's Bing and Google's Bard are now available to the public; users simply download the software or phone app and set up an account.
In the workplace, chatbots can be used to research topics and generate content such as policies, procedures, emails, letters and disciplinary notices. On the positive side, AI used for HR purposes can help address legal requirements, uncomfortable topics and messages for general audiences. However, AI has also been shown to generate content that lacks empathy, is overly generic, disregards the privacy of others, offers no substitute for face-to-face interaction, or contradicts itself. Asking the same question in different ways can also produce different results, which may further complicate or confuse an issue.
Beyond these concerns are the inherent limitations of chatbots, which are built on large language models trained on many available data sources. The results are only as good as the data they reference, and that data is not always valid or accurate. For example, Wikipedia is an often-used resource, but because it relies on user-generated content, studies have estimated its accuracy at only around 80%. In some cases, chatbots have also fabricated their own reference material, citing sources that are incorrect or entirely fictional to support an answer.
To build their databases, many chatbots retain all entered information for future reference by any user. Because users must input specific information to get the best results, they may enter sensitive or confidential information or trade secrets, which are then added to the chatbot's database. Depending on what is entered and how a future user phrases a query, companies may find their confidential data available to anyone asking the right questions.
Suggested Action to Take Before Using AI
As tools develop and improve, AI will find a place in most workplaces. As you determine how AI will be allowed in your workplace, consider taking the following actions:
- Do your research into AI: Understand what defines AI as well as the advantages and drawbacks of each tool. In addition to the information linked in this article, consider reading other resources to educate yourself as much as possible on AI. For example, two New York Times articles about chatbots that I found helpful while researching this article are "Chatbot Primer" (a 5-part series) and "Prompts for More Effective Chatbot Results." You may also find Conductor's article about using a chatbot helpful.
- Research your AI tools: Learn how AI is incorporated into the tools you use now or may rely on in the future. If you choose to use AI tools, be sure to understand their validity and limitations. For example, if you are going to use virtual analysis of recorded interviews, understand the science behind it, including whether the tool has been properly tested to remove implicit biases.
- Establish policies and procedures on AI use: Draft a policy to outline when and how AI can and cannot be used at your company. Include clear statements prohibiting discrimination and revealing confidential information. While the policy can be general to cover any AI, develop exact procedures and expectations as you initiate AI tools.
- Train employees and managers: As you expand the use of AI tools at your company, train your employees and managers when and how to use them properly and legally. Instruct users on what is and is not allowed as well as expectations such as reviewing and fact-checking all content before releasing it or personalizing a letter to an employee or customer.
The author and Affinity HR Group will continue to monitor this emerging technology and the regulatory landscape around its development and use.