AI Makes the Recruitment Mistake and Humans Face Legal Challenges


New York City has passed a law that prohibits employers from using AI hiring tools without a bias audit

Artificial intelligence is all the rage now. Living in a technology-driven world, we come across something new about AI on the internet every day. New York City's move to require bias audits of AI-based HR technology is the latest news making the rounds.

New York City has passed a first-of-its-kind law that prohibits employers from using AI and algorithm-based technologies for recruiting, hiring, or promotion unless those tools have first been audited for bias. New York City Mayor Bill de Blasio allowed the legislation to become law without his signature on Dec. 10. It takes effect on Jan. 2, 2023, and applies only to tools used to screen candidates for employment, or employees for promotion, who are residents of New York City, but it is a signal of things to come for employers across the country.

If New York City employers are "using an AI-type HR technology tool, whether it's a pre-employment assessment, video interviews scored by AI, or some other AI selection tool, it is probably subject to this new ordinance," said Mark Girouard, an attorney in the Minneapolis office of Nilan Johnson Lewis who, as part of his practice, counsels employers on pre-employment assessments. He said that "they will need to engage a third party to conduct bias audits of these AI-type HR technology tools to assess the tool's disparate impact (a neutral policy that nonetheless leads to discrimination) based on race, ethnicity or sex."

The law defines automated employment decision tools as "any computational process, generated from machine learning, statistical modeling, data analytics, or artificial intelligence" that scores, classifies, or otherwise makes a recommendation regarding candidates and is used to assist or replace an employer's decision-making process. "The definition is very broad," Girouard stated. "It's not clear whether the statute captures only pure AI tools or sweeps in a broader set of selection tools. If an employer uses a traditional pre-employment personality test, for example, which is scored by an algorithm based on a weighted combination of components, it could be covered; we're not certain," he said.

Matthew Jedreski, an attorney in the Seattle office of Davis Wright Tremaine and a member of the firm's artificial intelligence group, said that the law might "capture innumerable technologies used by many employers, including software that sources candidates, performs initial resume reviews, helps rank applicants or tracks employee performance."

Provisions of the Law

Under the law, employers will be prohibited from using an AI tool to screen job candidates or evaluate employees for promotion unless the technology has been audited for bias no more than one year before its use and a summary of the audit's results has been made publicly available on the employer's website. Girouard stated that it is still not clear when and how often the bias audit would need to be updated, or whether the audit is meant to cover the employer's hiring process in conjunction with the AI tool, or the tool itself more generally. Employers that fail to comply may be subject to a fine of up to $500 for a first violation and fines between $500 and $1,500 per day for each subsequent violation.

Frida Polli, co-founder and CEO of Pymetrics, a talent matching platform that uses behavioral science and AI, is one of the most vocal supporters of reducing bias in technology. Her company works to ensure that its AI tool's algorithms do not have a disparate impact. "We have a system that the algorithms work on before they are built that ensures that they are above the threshold that constitutes disparate impact," she said. "We test for that and continue to monitor it once it is deployed."
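The disparate-impact testing described above is often approximated with the "four-fifths rule" from longstanding EEOC guidance: a selection tool may show adverse impact if any group's selection rate falls below 80% of the highest group's rate. The sketch below, with hypothetical group names and counts, illustrates the arithmetic; it is not Pymetrics' actual methodology.

```python
def selection_rates(outcomes):
    """Per-group selection rates from (selected, screened) counts."""
    return {group: selected / total
            for group, (selected, total) in outcomes.items()}

def impact_ratios(outcomes):
    """Each group's selection rate divided by the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

def flags_adverse_impact(outcomes, threshold=0.8):
    """Groups whose impact ratio falls below the four-fifths threshold."""
    return [g for g, r in impact_ratios(outcomes).items() if r < threshold]

# Hypothetical audit data: (candidates selected, candidates screened).
audit = {
    "group_a": (48, 100),  # 48% selection rate (highest)
    "group_b": (30, 100),  # 30% rate -> impact ratio 0.625, flagged
}
print(flags_adverse_impact(audit))  # ['group_b']
```

A real audit under the law would be conducted by an independent third party and would break results out by the race, ethnicity, and sex categories the ordinance requires, but the core ratio computation is of this shape.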

'A Good Step'

The approved final version of the New York City law drew a range of responses, even among proponents of greater scrutiny of AI technology who had advocated for it from the beginning. Polli supports the law, calling it "a good step in the right direction." It includes several beneficial elements, she said, including provisions on candidate notification, transparency regarding what data is being evaluated, and testing for disparate impact.

Julia Stoyanovich, a professor of computer and data science at New York University and the founding director of the school's Center for Responsible AI, also praised the law, calling it a "substantial positive development," especially the disclosure components informing candidates about what is being done. "The law supports informed consent, which is critical, and it's been utterly lacking to date," she said. "And it also supports at least a limited form of recourse, allowing candidates to seek accommodations or to challenge the process."

'Deeply Flawed'

But many digital rights activists expressed disappointment with the final legislative product. The Center for Democracy and Technology (CDT) in Washington, D.C., described it as a "deeply flawed" and "weakened" standard that doesn't go far enough to curb AI bias in employment. "The New York City bill could have been a model for jurisdictions around the country to follow, but instead, it is a missed opportunity that fails to hold companies accountable and leaves essential forms of discrimination unaddressed," said Matthew Scherer, senior policy counsel for worker privacy at CDT.
