Automated decision-making technologies (ADMTs) are useful across many sectors, from employment and healthcare to education and finance. Precisely because they operate in such high-impact contexts, careful attention must be paid to the role they play in driving or assisting human decision-making. Otherwise, we risk leveraging these tools in ways that perpetuate discrimination and bias, harming not only those affected by them but also those who actively use them: a company that fails to address discriminatory outputs may damage its reputation or face substantial compliance costs.
Regulating automated decision-making technologies is challenging. The ability to deploy ADMTs in high-impact scenarios, coupled with their heavy reliance on data, specifically historical data, complicates the task of understanding and anticipating the adverse impacts they could generate. In other words, regulators need to ask themselves what an effective regulatory strategy would look like in light of potential novel use cases and the increasingly prevalent role that data plays in human decision-making, particularly in high-impact domains. In the US, this process is further complicated by the fact that different states have different legislative interests.
In this respect, the US regulatory strategy is inconsistent. Some states like California, Colorado, and Connecticut are taking a more horizontal approach to this issue whereby ADMTs are regulated throughout their life cycle and/or across domains. Other states like New York, Illinois, Massachusetts, and Maine have adopted a vertical strategy favoring ADMT regulation in specific contexts like employment—New York City Local Law No. 144 (NYC Local Law 144), which is the subject of this post, is one such example.
Consequently, we’ll begin this discussion by providing a high-level overview of NYC Local Law 144 after which we’ll break down the bias audit requirements it proposes. For readers interested in further exploring the AI policy landscape, we invite you to follow Lumenova AI’s blog, where you can keep up with the latest AI regulation developments and insights.
Overview
NYC Local Law 144 is specifically designed to address the use of ADMTs in employment contexts, classifying them as automated employment decision tools (AEDTs), which are defined as:
- “Any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.”
In essence, the law places a regulatory responsibility upon employers who must certify that prospective job applicants and active employees are adequately protected from the potential adverse impacts that AEDTs could generate, mainly throughout hiring and promotion procedures. Employers must also ensure that certain kinds of AEDT information, such as the date the AEDT is deployed and the number of applicants categorized as unknown by an AEDT assessment, are made publicly available on the employer’s website.
More concretely, employers must undergo a bias audit—we’ll discuss this at length in the next section—that evaluates whether an AEDT can promote or sustain discriminatory decision outcomes, and when AEDTs are leveraged for applicant screening procedures, employers must:
- Notify applicants of AEDT use at least 10 business days before the tool is used (see the deadline sketch after this list).
- Allow applicants to request alternative screening procedures that don’t involve AEDTs.
- Ensure applicants receive an explanation of the AEDT’s assessment criteria at least 10 business days before AEDT use.
- Upon request, provide applicants with detailed information outlining the employer’s data retention policy, the AEDT’s training data, and data lineage. Applicants must formally submit such requests in writing, and employers must fulfill them within 30 business days of submission.
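Since several of these obligations hinge on business-day deadlines, here is a minimal Python sketch of how an employer might compute the latest candidate-notice date for a planned AEDT use. It is an illustration only: it skips weekends but ignores public holidays, and the dates are hypothetical.

```python
from datetime import date, timedelta

def subtract_business_days(end: date, days: int) -> date:
    """Step back `days` business days from `end`, skipping weekends.
    Public holidays are ignored in this simplified sketch."""
    current = end
    while days > 0:
        current -= timedelta(days=1)
        if current.weekday() < 5:  # Monday-Friday
            days -= 1
    return current

# Hypothetical planned first use of an AEDT on a Friday.
planned_use = date(2024, 6, 28)

# Latest date to notify candidates so that at least 10 business days
# elapse before the tool is used.
print(subtract_business_days(planned_use, 10))  # 2024-06-14
```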
As for enforcement, employers who fail to comply with NYC Local Law 144 will face significant penalties seeing as violations are evaluated on a day-by-day basis—two identical violations that occur on different days will warrant two separate penalties. Importantly, employers are also required to notify employees and applicants of violations when they occur, and failing to do so will itself constitute a violation.
Moreover, under NYC Local Law 144, initial violations, and any additional violations that occur on the same day as the initial violation, won’t incur penalties of more than $500, whereas subsequent violations warrant penalties ranging between $500 and $1,500. Since violations are gauged on a day-by-day basis, organizations must tread carefully; otherwise, they risk letting penalties accumulate, which could eventually result in severe overall compliance costs, possibly even crippling ones for smaller organizations.
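To get a rough sense of how quickly day-by-day penalties can compound, consider the back-of-the-envelope sketch below. The caps follow the figures above; the assumption that a single continuing violation is assessed at the maximum rate each day is ours, for illustration.

```python
def max_penalty(days: int, first_cap: int = 500, later_cap: int = 1500) -> int:
    """Worst-case penalty for one violation persisting `days` days:
    up to $500 on day one, up to $1,500 on each subsequent day."""
    if days <= 0:
        return 0
    return first_cap + (days - 1) * later_cap

print(max_penalty(1))   # 500
print(max_penalty(30))  # 44000 -> $500 + 29 * $1,500
print(max_penalty(90))  # 134000
```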
The Bias Audit
The AEDT bias audit is the most important requirement outlined in NYC Local Law 144. Consequently, and for clarity’s sake, we’ll break down this section into two parts: 1) bias audit obligations for employers and auditors, and 2) the primary components and overall structure of the bias audit.
In terms of bias audit obligations, employers and auditors must adhere to several requirements, which are detailed below:
- Employers must ensure that bias audits are administered by an independent auditor. For such auditors to qualify as “independent” they must:
- Not be involved in any part of the AEDT lifecycle (other than the audit process).
- Not have an active employment relationship with the employer in question while the bias audit is administered.
- Not have an active employment relationship with the AEDT developer or distributor.
- Not possess or display any vested financial or material interests in the employer leveraging the AEDT or the AEDT vendor.
- Employers must conduct bias audits annually. If more than one year has elapsed since the last bias audit was administered, an employer must promptly cease AEDT use, and if an employer intends to deploy an AEDT, they must ensure that it has undergone a bias audit within the year preceding its first use.
- Auditors must ensure that bias audits center on historical AEDT data, acquired from the employers or employment agencies using the AEDT in question. In cases where an employer has never used the AEDT, or has already provided the auditor with its own historical AEDT data, the audit may rely on the historical data of other employers or employment agencies leveraging the same AEDT.
- Where historical data is insufficient, employers can rely on bias audits that utilize test data. However, in these cases, employers must provide a summary of bias audit results, an explanation of why historical data wasn’t used, and a description of how test data was created and acquired.
- Before using an AEDT, an employer must make the following AEDT information publicly available on their website:
- A clear summary of the results of the latest bias audit and the date the audit was performed. These results must remain on the employer’s website for at least 6 months following the last AEDT use.
- A clear explanation detailing the lineage of the data used to administer the bias audit and an explanation of the data itself.
- A clear description outlining the main components of the bias audit. A schematic example of what such a posting might contain follows.
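As an illustration of the kind of information such a posting might bring together, here is a minimal sketch. The format, field names, tool name, and figures are all hypothetical; the law prescribes the content of the disclosure, not its structure.

```python
# Hypothetical summary of publicly posted AEDT information (illustrative only).
published_summary = {
    "aedt_name": "ExampleScreener v2",  # hypothetical tool
    "bias_audit_date": "2024-05-01",
    "data_description": "Historical selection data from the 2023 hiring cycle",
    "selection_rates": {"female": 0.42, "male": 0.45},
    "impact_ratios": {"female": 0.93, "male": 1.0},
    "unknown_category_count": 37,  # applicants with unknown demographic data
}

for field, value in published_summary.items():
    print(f"{field}: {value}")
```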
Now that we’ve covered employer and auditor bias audit obligations, we can examine the structure and components of the bias audit in more detail. We’ll begin by defining the core components of AEDT audits, described below with examples; a short code sketch consolidating these calculations follows the list:
- Selection rate: the rate at which prospective applicants or employees in a given category are chosen to advance in the hiring process or are classified by an AEDT into some other category. For example, if out of 50 applicants of Indigenous origin, 10 are selected to advance in the hiring process, the selection rate for Indigenous applicants would be equivalent to 20%.
- Median score: the median score across the sample of prospective applicants or employees subject to AEDT evaluation. For example, if you had 10 applicants for a job, three of whom receive a score of 4, two a score of 5, four a score of 7, and one a score of 10, the median score would be 6, the average of the two middle scores (5 and 7).
- Scoring rate: the rate at which prospective applicants or employees receive a score above the sample’s median score. For example, using the same distribution as the previous example, the scoring rate would be 50%: five of the ten applicants (the four who scored 7 and the one who scored 10) score above the median of 6.
- Impact ratio: the ratio obtained by dividing either 1) the selection rate of a category by the selection rate of the most selected category, or 2) the scoring rate of a category by the scoring rate of the highest-scoring category. For example, if the selection rate is 20% for Indigenous applicants and 60% for White applicants, and White applicants are the most selected category, the impact ratio for Indigenous applicants would be 0.33, whereas the impact ratio for White applicants would be 1.
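To make these definitions concrete, here is a minimal Python sketch that reproduces the worked examples above. The applicant counts and category labels are illustrative, not drawn from the law.

```python
from statistics import median

def selection_rate(selected: int, total: int) -> float:
    """Share of applicants in a category who advance."""
    return selected / total

# Scores from the median-score example: three 4s, two 5s, four 7s, one 10.
scores = [4, 4, 4, 5, 5, 7, 7, 7, 7, 10]
sample_median = median(scores)  # (5 + 7) / 2 = 6.0

# Scoring rate: share of applicants scoring above the sample median.
scoring = sum(s > sample_median for s in scores) / len(scores)  # 0.5

# Impact ratios: each category's selection rate divided by the highest rate.
rates = {"Indigenous": selection_rate(10, 50),  # 0.20
         "White": selection_rate(30, 50)}       # 0.60 (illustrative counts)
highest = max(rates.values())
impact_ratios = {cat: round(rate / highest, 2) for cat, rate in rates.items()}

print(sample_median, scoring, impact_ratios)
# 6.0 0.5 {'Indigenous': 0.33, 'White': 1.0}
```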
Each bias audit that is administered must ensure that impact ratios are calculated holistically for all pertinent categories of sex, race/ethnicity, and intersectionality, and that the number of individuals an AEDT categorizes as “unknown”, meaning those for whom the relevant demographic data is unavailable, is documented. However, not all AEDT bias audits will adopt the same structure. For instance, in cases where an AEDT is leveraged for candidate selection, promotion, or group classification, a bias audit should:
- Determine the selection rate for each category, especially for sex, race/ethnicity, and intersectionality categories.
- Determine the impact ratio for each category, especially for sex, race/ethnicity, and intersectionality categories.
- If candidates are classified into groups by an AEDT, such as by reference to a desired trait like “company culture fit”, selection rates and impact ratios must be calculated independently for each group, as the sketch after this list illustrates.
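As a rough illustration of how an auditor might tabulate selection rates and impact ratios across sex, race/ethnicity, and intersectional categories, consider the sketch below. The records, category labels, and counts are hypothetical.

```python
from collections import Counter

# Hypothetical historical AEDT data: (sex, race/ethnicity, was_selected).
records = [
    ("female", "Hispanic", True), ("female", "Hispanic", False),
    ("male", "White", True), ("male", "White", True),
    ("female", "White", True), ("male", "Hispanic", False),
]

def impact_ratios(key):
    """Selection rate per category (as picked out by `key`),
    divided by the highest category rate."""
    totals, selected = Counter(), Counter()
    for rec in records:
        cat = key(rec)
        totals[cat] += 1
        selected[cat] += rec[2]
    rates = {cat: selected[cat] / totals[cat] for cat in totals}
    top = max(rates.values())
    return {cat: round(rate / top, 2) for cat, rate in rates.items()}

# Per-sex, per-race/ethnicity, and intersectional (sex x race/ethnicity).
print(impact_ratios(lambda r: r[0]))
print(impact_ratios(lambda r: r[1]))
print(impact_ratios(lambda r: (r[0], r[1])))
```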
Alternatively, in cases where an AEDT is used solely for candidate scoring or promotion, a bias audit should:
- First determine the median score for the entire sample, followed by the median scores of the sex, race/ethnicity, and intersectionality categories.
- Determine the scoring rate for each category, particularly those of sex, race/ethnicity, and intersectionality.
- Determine the impact ratio for each category, particularly those of sex, race/ethnicity, and intersectionality.
- If a given category comprises less than 2% of the data leveraged to administer the bias audit, the auditor may decide to exclude this data from the audit. However, if this occurs, an explanation justifying the auditor’s decision must be provided, and the number of applicants that fall into this category, as well as their scoring rate, must be reported. The sketch below illustrates this exclusion rule.
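Here is a minimal sketch, under the same caveats as before (hypothetical categories and scores), of a scoring-based audit that computes scoring rates and impact ratios while flagging any category below the 2% threshold for possible exclusion.

```python
from statistics import median

# Hypothetical (category, score) pairs; category "C" makes up 1% of the data.
data = [("A", 4)] * 60 + [("B", 7)] * 39 + [("C", 9)] * 1

sample_median = median(score for _, score in data)

audit, excluded = {}, {}
for cat in {c for c, _ in data}:
    scores = [s for c, s in data if c == cat]
    share = len(scores) / len(data)
    rate = sum(s > sample_median for s in scores) / len(scores)
    if share < 0.02:
        # Excludable, but the count and scoring rate must still be reported,
        # along with the auditor's rationale.
        excluded[cat] = {"count": len(scores), "scoring_rate": rate}
    else:
        audit[cat] = rate

top = max(audit.values())
print({cat: round(rate / top, 2) for cat, rate in audit.items()})  # impact ratios
print(excluded)
```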
Conclusion
Despite its narrow scope, NYC Local Law 144 could deeply influence the US AI policy landscape. As ADMTs are integrated more widely across high-impact domains, core responsible AI (RAI) principles like fairness, accountability, and transparency will only become more crucial. ADMTs can dramatically improve and streamline human decision-making processes, but those gains come at a cost, especially considering these tools’ reliance on historical data. If we want to ensure that ADMTs aren’t leveraged in ways that inadvertently perpetuate discrimination and reinforce systemic biases, we need mechanisms in place, like bias audits, that evaluate the harmful impacts these technologies could generate from a holistic and independent perspective.
Moreover, as we discussed in the introduction, ADMT regulations are emerging across the US, and while only a few have advanced past the drafting stage, those enacted early on, particularly in states with major cities like New York and California, will lay the groundwork for a standardized US ADMT regulatory ecosystem.
Fortunately, tracking the latest developments and advancements in the AI regulation landscape doesn’t have to be done in the dark. We invite readers to check out Lumenova AI’s blog, where you can access several resources examining various kinds of AI legislation from descriptive, analytical, predictive, and global perspectives.
Alternatively, for those who are interested in initiating concrete AI governance and risk management protocols, consider trying out Lumenova AI’s RAI platform, and book a product demo today.