A New Era in AI Hiring Regulations
New York City has recently taken a significant step toward regulating the use of artificial intelligence (AI) and automated decision-making tools in hiring. The city has adopted final rules governing how businesses and employers may use these tools when making employment decisions. This article provides an overview of the new regulations, their implications, and their potential impact on the future of AI in hiring.
Overview of the New Rules and Guidelines
Legal Framework: Local Law 144
The new regulations stem from Local Law 144 of 2021, which was enacted in December 2021. Local Law 144 mandates that businesses using automated decision-making tools for employment decisions must disclose their use and provide a description of how the tools work. The law also requires that these tools undergo annual bias audits to ensure they are not discriminating against job applicants.
Final Rules Established by the NYC Department of Consumer and Worker Protection
The New York City Department of Consumer and Worker Protection (DCWP) was tasked with developing the specific rules and guidelines for implementing Local Law 144. The final rules were adopted on April 6, 2023, and enforcement began on July 5, 2023. Key aspects of the final rules include:
- Definition of Automated Decision-Making Tools: The rules define an automated employment decision tool as a computational process derived from machine learning, statistical modeling, data analytics, or AI that substantially assists or replaces discretionary human decision-making in hiring.
- Disclosure Requirements: Employers must notify applicants, in writing, that an automated decision-making tool will be used in the hiring process, at least 10 business days before its use. The notice must include a plain-language description of how the tool works and the job qualifications or characteristics it evaluates.
- Annual Audits: Employers must have their automated decision-making tools evaluated at least once a year by an independent auditor and must publish a summary of the results. The bias audit must assess whether the tool disproportionately screens out applicants based on protected characteristics such as race/ethnicity and sex.
- Recordkeeping Requirements: Employers must maintain records of their automated decision-making tools, including their descriptions, annual audits, and any modifications made to the tools.
- Remediation: If an audit reveals that an automated decision-making tool is discriminatory, employers must take corrective action to address the issue. This may involve modifying the tool, implementing alternative methods, or discontinuing its use.
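At its core, the bias audit described above compares selection rates across demographic categories. The following is a minimal illustrative sketch in Python of the impact-ratio measure used in the DCWP rules (each category's selection rate divided by the rate of the most-selected category); the function name and the applicant counts are hypothetical, not part of the regulation:

```python
def impact_ratios(outcomes):
    """Compute per-category selection rates and impact ratios.

    `outcomes` maps a category label (e.g. a race/ethnicity or sex
    category) to a (selected, total) pair of applicant counts.
    Returns {category: (selection_rate, impact_ratio)}, where the
    impact ratio divides each rate by the highest category rate.
    """
    rates = {cat: sel / tot for cat, (sel, tot) in outcomes.items() if tot}
    top = max(rates.values())  # rate of the most-selected category
    return {cat: (rate, rate / top) for cat, rate in rates.items()}

# Hypothetical applicant counts, for illustration only.
data = {"group_a": (40, 100), "group_b": (24, 100)}
for cat, (rate, ratio) in impact_ratios(data).items():
    print(f"{cat}: selection rate {rate:.2f}, impact ratio {ratio:.2f}")
```

An impact ratio well below 1.0 for some category (here, 0.60 for `group_b`) is the kind of disparity an annual audit would surface and flag for remediation.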
The Importance of Compliance
Failure to comply with the new rules can result in significant consequences for businesses. The DCWP has the authority to investigate potential violations, impose civil penalties, and order employers to take corrective action. In addition to the legal penalties, non-compliant businesses may also face reputational damage, as consumers increasingly prioritize ethical and inclusive hiring practices.
Potential Impact on the Future of AI in Hiring
These new regulations signal a growing trend toward increased oversight and accountability in the use of AI and automated decision-making tools in hiring. Employers must now navigate this evolving legal landscape and ensure that their AI hiring tools are not only effective but also fair and compliant with regulations.
Preparing for a New Regulatory Landscape
As New York City adopts these final rules for automated decision-making tools in AI hiring, employers must adapt their practices to comply with the new regulations. By understanding the requirements and taking proactive steps to ensure their AI tools are fair and unbiased, businesses can not only avoid potential legal consequences but also demonstrate their commitment to ethical and inclusive hiring practices.
To prepare for this new regulatory landscape, employers should consider the following steps:
- Evaluate existing AI hiring tools: Review your current AI and automated decision-making tools to determine if they fall under the purview of the new regulations. Assess their potential impact on job applicants, particularly those with protected characteristics.
- Implement disclosure measures: Develop clear and concise written notifications for job applicants, informing them of the use of automated decision-making tools in the hiring process and providing an accessible explanation of how these tools work.
- Establish audit procedures: Set up a framework for conducting annual audits of your AI hiring tools to assess their potential discriminatory impact. This may involve engaging external experts or developing internal expertise in AI fairness and bias.
- Maintain thorough records: Keep comprehensive records of your AI hiring tools, their descriptions, annual audit results, and any modifications made. These records will be crucial for demonstrating compliance with the new regulations.
- Develop remediation strategies: Be prepared to take corrective action if your AI hiring tools are found to be discriminatory. This may involve refining the tool’s algorithm, implementing alternative selection methods, or discontinuing the use of the tool altogether.
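The recordkeeping and audit steps above can be sketched as a simple internal data model. Everything here is illustrative, not a compliance requirement: the class names and sample data are hypothetical, and the 0.8 flag threshold is borrowed from the EEOC four-fifths rule of thumb (the NYC rules themselves set no pass/fail cutoff):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AuditRecord:
    """One annual bias audit of a hiring tool."""
    audit_date: date
    auditor: str
    impact_ratios: dict[str, float]  # category -> selection-rate ratio

    def flags(self, threshold: float = 0.8):
        """Categories whose impact ratio falls below the threshold.

        0.8 mirrors the EEOC four-fifths rule of thumb; used here
        only as an internal trigger for a remediation review.
        """
        return [c for c, r in self.impact_ratios.items() if r < threshold]

@dataclass
class ToolRecord:
    """Running record of a tool: description, audits, modifications."""
    name: str
    description: str
    audits: list[AuditRecord] = field(default_factory=list)
    modifications: list[str] = field(default_factory=list)

# Hypothetical record, for illustration only.
tool = ToolRecord("resume-screener-v2", "Ranks resumes against job postings")
tool.audits.append(AuditRecord(date(2023, 7, 1), "Example Auditors LLC",
                               {"group_a": 1.0, "group_b": 0.61}))
print(tool.audits[-1].flags())  # → ['group_b']
```

Keeping audits and tool modifications in one structure like this makes it straightforward to show, on request, what was audited, when, by whom, and what changed in response.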
By proactively addressing these new regulatory requirements, businesses can continue to leverage the power of AI in hiring while promoting a fair and inclusive environment for all job applicants.