A New York City law restricting the use of artificial intelligence tools in the hiring process will go into effect early next year. Although the law is seen as a landmark in protecting job applicants from bias, little is known to date about how employers and vendors must comply, raising concerns about whether the law is the right way to combat bias in hiring algorithms.
The law comes with two main requirements: employers must vet any automated decision tools used to hire or promote employees before using them, and they must notify candidates or employees at least 10 business days before their use. The penalty is $500 for a first violation and $1,500 for each subsequent violation.
Whereas Illinois has regulated the use of AI analytics of video interviews since 2020, New York City’s law is the first in the nation to apply to the entire hiring process. It aims to address concerns from the U.S. Equal Employment Opportunity Commission and the U.S. Department of Justice that “blind reliance” on AI tools in the hiring process could lead businesses to violate the Americans with Disabilities Act.
“New York City is holistically examining how hiring practices have changed with automated decision systems,” Julia Stoyanovich, Ph.D., professor of computer science at New York University and a member of the city’s Automated Decision Systems Task Force, told HR Dive. “This is the context in which we ensure that people have equitable access to economic opportunities. What if they can’t find a job, but don’t know why?”
Beyond the “model pool”
AI recruiting tools are designed to support HR teams throughout the hiring process, from posting ads on job boards to screening candidates’ resumes to determining the right compensation package to offer. The goal, of course, is to help companies find someone with the right background and skills for the job.
Unfortunately, each step of this process may be subject to bias. This is especially true if an employer’s “model pool” of potential candidates is judged against an existing roster of employees. Notably, Amazon had to scrap a recruiting tool trained to assess candidates based on a decade’s worth of submitted resumes, because the algorithm had learned to penalize resumes containing the word “women’s.”
“You are trying to identify someone who you predict will be successful. You use the past as prologue to the present,” said David J. Walton, partner at law firm Fisher & Phillips LLP. “When you look back and use the data, if the model pool is predominantly white, male and under 40, by definition that’s what the algorithm will look for. How do you rework the model pool so that the output isn’t biased?”
AI tools used to assess candidates during interviews or tests can also cause problems. Measuring speech patterns in a video interview can weed out candidates with a speech impediment, while tracking keyboard inputs can weed out candidates with arthritis or other conditions that limit dexterity.
Walton said these tools are akin to the “pull test” often given to candidates for firefighting roles: “It doesn’t discriminate on its face, but it could have a disparate impact on a protected class” of candidates as defined by the ADA.
There is also a category of AI tools that aim to identify candidates with the right personality for the job. These tools are also problematic, said Stoyanovich, who recently published an audit of two commonly used tools.
The problem is technical (the tools generated different scores for the same resume submitted as plain text versus as a PDF) as well as philosophical. “What is a ‘team player’?” she said. “AI is not magic. If you don’t tell it what to look for and validate it using the scientific method, then its predictions are no better than a random guess.”
Legislation — or stricter regulations?
The New York City law is part of a larger trend at the state and federal levels. Similar provisions were included in the American Data Privacy and Protection Act, introduced in Congress earlier this year, while the proposed Algorithmic Accountability Act would require “impact assessments” of automated decision systems across a range of use cases, including employment. Additionally, California is considering adding liability related to the use of AI recruiting tools to its anti-discrimination laws.
However, there are concerns that legislation is not the right way to approach AI in hiring. “New York City’s law doesn’t mandate anything new,” said Matthew Scherer, senior policy counsel at the Center for Democracy & Technology. “The disclosure requirement isn’t very meaningful, and the audit requirement is just a narrow subset of what federal law already requires.”
Given the limited guidance issued by New York City officials before the law takes effect on January 1, 2023, it’s also unclear what a technology audit looks like, or how it should be done. Walton said employers will likely need to partner with someone who has expertise in data and business analytics.
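The law itself doesn’t spell out what such an audit must measure. One common starting point in employment-discrimination analysis, though, is the EEOC’s “four-fifths rule,” under which a selection rate for any group below 80% of the highest group’s rate is treated as evidence of adverse impact. A minimal sketch of that check (the function names and candidate counts below are illustrative, not drawn from any real audit):

```python
# Sketch of a disparate-impact check using the EEOC's "four-fifths rule":
# a group whose selection rate falls below 80% of the highest group's rate
# is flagged as showing possible adverse impact.
# All names and numbers here are hypothetical, for illustration only.

def selection_rates(outcomes):
    """outcomes: {group: (selected, total_applicants)} -> {group: rate}"""
    return {g: selected / total for g, (selected, total) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Return {group: (impact_ratio, passes)} relative to the top group."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: (rate / top, rate / top >= threshold)
            for g, rate in rates.items()}

# Hypothetical screening results from an automated resume screener.
results = {"group_a": (48, 100), "group_b": (30, 100)}
for group, (ratio, passes) in four_fifths_check(results).items():
    print(f"{group}: impact ratio {ratio:.2f} -> {'ok' if passes else 'FLAG'}")
```

In this made-up example, group_b’s impact ratio is 30/48 ≈ 0.63, well under the 0.8 threshold, so an auditor would flag the tool for further review. A real bias audit under the law would likely go further, but selection-rate comparisons of this kind are the usual first step.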
On a higher level, Stoyanovich said AI recruiting tools would benefit from a standards-based audit process. Standards should be discussed publicly, she said, and certification should be done by an independent body, whether a nonprofit organization, a government agency, or another entity with no financial stake in the outcome. Given these needs, Scherer said he believes regulatory action is preferable to legislation.
The challenge for those pushing for tighter regulation of these tools is getting policymakers to lead the conversation.
“The tools are already there, and policy is not keeping pace with technological change,” Scherer said. “We are working to ensure that policymakers are aware that there must be real requirements for audits of these tools, and that there must be meaningful disclosure and accountability when the tools lead to discrimination. We have a long way to go.”