I. Introduction
Protection against “algorithmic discrimination” is front and center in our federal government’s current framework for AI.1 As noted in my Visual Guide to the U.S. Federal Regulation of AI, there’s seemingly no significant federal agency that does not have a regulatory dog in this fight.
Of all the intractable issues that AI gives rise to, algorithmic discrimination is in my view the most unsolvable. It resides at the convergence of:
- civil rights and race, gender, and other relations (the biggest political footballs of our times);2 and
- how AI works, e.g., it is trained on data that is itself biased.
[Figure: Federal regulatory agencies charged with combatting algorithmic discrimination under President Biden’s Oct. 2023 Executive Order on AI]
Any hope of “solving” the latter would come through the application of data science and statistics. And that is made entirely impossible by the political forces at play.
“There are three kinds of lies: Lies, Damned Lies, and Statistics.” – Mark Twain
Thankfully, cutting this algorithmic Gordian knot is not the bar that companies providing or implementing generative AI services need to clear. Practically speaking, all you will need to do is comply with the applicable laws and regulations to come.
This article focuses on one of the most important manifestations of algorithmic discrimination: the use of AI in automated employment decision tools.
II. The law and regulation on disparate impact
All efforts to regulate against algorithmic discrimination stem from the disparate impact theory of discrimination. Congress, the courts, and regulatory agenciesโnamely the Equal Employment Opportunity Commission (established by Congress to enforce Title VII of the 1964 Civil Rights Act)โhave incorporated disparate impact concepts into various laws and regulations.
EEOC laws cover most employers with at least 15 employees.3
A. The traditional disparate impact analysis
Disparate impact cases typically involve the following questions, based on the Supreme Court’s seminal opinion in Griggs v. Duke Power Co.:
- Does the employer use a particular employment practice that has a disparate impact on the basis of race, color, religion, sex, or national origin?
- If the selection procedure has a disparate impact based on race, color, religion, sex, or national origin, can the employer show that the selection procedure is job-related and consistent with business necessity?
- If the employer shows that the selection procedure is job-related and consistent with business necessity, is there a less discriminatory alternative available?4
B. The application of disparate impact to automated employment decision tools using AI
In May 2023, the EEOC published updated guidance on automated employment decision tools using AI.5 The EEOC guidance specifies that employers that use AI hiring decision tools can be liable for disparate impact violations, even if the tools are developed by third party vendors.6
The EEOC further notes: “One advantage of algorithmic decision-making tools is that the process of developing the tool may itself produce a variety of comparably effective alternative algorithms.”
And concludes: “The EEOC encourages employers to conduct self-analyses on an ongoing basis to determine whether their employment practices have a disproportionately large negative effect on a basis prohibited under Title VII or treat protected groups differently. Generally, employers can proactively change the practice going forward.”7
The EEOC’s analysis does not entail opening up the black box of these AI employment decision tools. It does not directly assess how they work; it looks solely to, and proscribes, any disparate impact in AI output.
This is deeply flawed, as a matter of both logic and practice. The old adage “the devil is in the details” comes to mind. As does “when you have a hammer, an awful lot of things start looking like a nail….”8
C. The NYC law on automated employment decision tools
The first proving ground for this modern version of MLK’s ideal arises not at the federal or state level but at the municipal one. New York City’s Local Law 144 on automated employment decision tools was enacted in December 2021, with enforcement beginning in July 2023.
It requires all NYC employers to disclose and audit any use and output of AI-based automated employment decision tools,9 echoing the EEOC’s May 2023 AI-output-focused guidance.
The apparent rationale is that “if employers are forced to measure and disclose the fairness of their algorithmic hiring and promotion systems, they will be incentivized to avoid building or buying biased systems.”10
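For a sense of what these audits actually measure, below is a minimal sketch, in Python, of the selection-rate and impact-ratio calculations described in the NYC guidance (see note 9). The category labels and applicant counts are hypothetical, for illustration only; the law itself requires the audit to be performed by an independent auditor.

```python
# A minimal sketch of the selection-rate and impact-ratio calculations
# described in the NYC guidance (see note 9). All category labels and
# counts below are hypothetical, for illustration only.

# Hypothetical applicant pools: category -> (selected, total applicants).
pools = {
    "Category A": (48, 100),
    "Category B": (30, 100),
    "Category C": (12, 50),
}

# Selection rate: fraction of applicants in a category who were selected.
rates = {cat: sel / total for cat, (sel, total) in pools.items()}
highest = max(rates.values())

# Impact ratio: each category's selection rate divided by the selection
# rate of the most-selected category.
for cat, rate in rates.items():
    print(f"{cat}: selection rate {rate:.2f}, impact ratio {rate / highest:.2f}")
```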
The law is subject to numerous critiques.11 This is no surprise and not only because of the politics involved. The issues are impossibly complex, starting with the technological equivalent of original sin embedded in AI itself.
III. The problems with algorithmic discrimination bias audits
Algorithmic discrimination will not be a stand-alone or primary claim in most cases. It will be a secondary claim, raised primarily to support a broader claim of actual employment discrimination.
The EEOC is authorized to independently investigate possible discrimination (presumably now also including the algorithmic kind) under Title VII. But for all but the really big fish, the most likely way an algorithmic discrimination bias audit might come up against your average company is if a former employee files:
- a federal claim of discrimination with the EEOC against you, and/or
- a (subsequent) federal private job discrimination lawsuit against you.12 13
The problem of discrimination, algorithmic or otherwise, cannot be solved by regulation or anything else. It can only be mitigated, and only in ways that will always be subject to fair criticism and then some.14 It is part and parcel of the broader Sisyphean struggle against discrimination itself.
There are myriad reasons for this, starting with the two outlined below.
A. Bias in, bias out
AI replicates the human mind in that it analyzes data by classifying it. It does so by identifying similarities and differences between data points, i.e., by discriminating.
If there is an overrepresentation of one group over another for a given factor that is deemed important for hiring decision-making in the real world, then this bias will be reflected in the data. And the use of such biased data to train the model will result in model outputs that also reflect such bias.
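To make the point concrete, here is a toy sketch with entirely fabricated numbers. A naive “model” that scores candidates by the historical hire rate of similar past applicants will faithfully reproduce whatever group imbalance is baked into its training data:

```python
from collections import defaultdict

# Hypothetical training records: (group, hired). Group "X" was historically
# hired at twice the rate of group "Y" for otherwise comparable candidates.
history = (
    [("X", True)] * 60 + [("X", False)] * 40 +
    [("Y", True)] * 30 + [("Y", False)] * 70
)

counts = defaultdict(lambda: [0, 0])  # group -> [hires, total applicants]
for group, hired in history:
    counts[group][0] += int(hired)
    counts[group][1] += 1

# The "model" here is just the learned historical rate: its scores for new
# applicants inherit the 2:1 disparity present in the training data.
for group, (hires, total) in counts.items():
    print(f"Learned hire score for group {group}: {hires / total:.2f}")
```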
The main ways to mitigate bias in data are to apply data science during the collection of the data, or to apply data science/statistics to “remove” bias during the processing of the data. One standard technique of the latter kind is sketched below.
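This is a minimal sketch of reweighing (a well-known technique from the fairness literature, per Kamiran & Calders): assign each training record a weight so that group membership and the outcome label become statistically independent in the weighted data. The records are hypothetical:

```python
from collections import Counter

# Hypothetical records: (group, label), where label 1 = hired.
records = (
    [("X", 1)] * 60 + [("X", 0)] * 40 +
    [("Y", 1)] * 30 + [("Y", 0)] * 70
)
n = len(records)

group_counts = Counter(g for g, _ in records)
label_counts = Counter(y for _, y in records)
pair_counts = Counter(records)

# Reweighing: w(group, label) = P(group) * P(label) / P(group, label).
# Combinations the historical data under-represents get weights above 1.
weights = {
    (g, y): (group_counts[g] / n) * (label_counts[y] / n) / (pair_counts[(g, y)] / n)
    for (g, y) in pair_counts
}
for pair, w in sorted(weights.items()):
    print(pair, round(w, 3))
```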
If such efforts to “de-discriminate” are not, however, standardized, then the outputs of individual efforts cannot be meaningfully compared.15
And the underlying politics will inevitably shed far more heat than light on the subject (presuming they let through any light to begin with).
B. And what exactly are we supposed to do with the results…?
The elephant in the room is: How much disparate impact is too much disparate impact? And worse, how much is acceptable?
The EEOC’s May 2023 Select Issues document provides guidance-less guidance on the “four-fifths rule,” a general rule of thumb for determining whether the selection rate for one group is “substantially” different than the selection rate for another group.16 It is a “practical and easy-to-administer” test that may be used “to draw an initial inference that the selection rates for two groups may be substantially different, and to prompt employers to acquire additional information about the procedure in question.”17 The four-fifths rule, however, “is not always appropriate, especially where it is not a reasonable substitute for a test of statistical significance.”18 The document punts on anything else on this issue.19
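For illustration only, here is a minimal sketch of the four-fifths arithmetic with hypothetical applicant counts; per the EEOC’s own caveats, a ratio below 0.8 supports at most an initial inference of disparate impact:

```python
# A minimal sketch of the four-fifths rule of thumb, with hypothetical
# applicant counts. A ratio below 0.8 supports only an initial inference
# of disparate impact, subject to the EEOC's caveats about statistical
# significance noted above.

def four_fifths_check(selected_a, total_a, selected_b, total_b):
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
    return ratio, ratio < 0.8

# E.g., 60 of 100 Group A applicants selected vs. 40 of 100 for Group B:
ratio, flagged = four_fifths_check(60, 100, 40, 100)
print(f"ratio = {ratio:.2f}, flagged = {flagged}")  # ratio = 0.67, flagged = True
```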
IV. Conclusion
NYC Local Law 144 “requires employers and employment agencies to do a bias audit; however, the Law does not require any specific actions based on the results of a bias audit.”20 Should a NY court ultimately find the use of an automated employment decision tool to be in violation, however, it can impose a civil penalty of up to $500 for a first violation and up to $1,500 for each subsequent violation.21 Time will tell how the federal government and other states address these issues.
Your primary line of defense against liability for algorithmic discrimination is to mitigate against job discrimination claims in the first place. The standard guidance applies: employ compliant, non-discriminatory hiring, employment, and termination practices in general, and mitigate against hostile-environment claims through training, monitoring, etc.
But the reality is that whether or not you become subject to individual litigation for employment discrimination is at least in part out of your control. The same applies with respect to whether or not you become subject to government regulatory actions. But control what you can control: stay up to date with EEOC guidance, Title VII case law, and the law of your governing state(s), and update your policies and procedures accordingly.
And the NYC law points to a way for employers to be at the forefront of this issue. You could consider sending a questionnaire to your automated employment decision tool vendor annually, such as the one developed by the Data & Trust Alliance,22 and incorporating the results into your vendor selection/renewal process.23 But should you do so, whether under legal obligation or voluntarily, think carefully about your process for navigating the results and consider seeking legal counsel.
© 2024 Ko IP & AI Law PLLC
- See President Biden’s Executive Order on AI, available here, and Blueprint for an AI Bill of Rights, available here.
- Race relations is actually a subject near and dear to my heart, from my past life as a U.S. history major with a focus on the 1960s and the civil rights movement. But while I am generally sympathetic to the view that “race always matters” and am beyond troubled by the current anti-diversity, equity, and inclusion (DEI) movement, the focus of this blog article will be on the technological and legal compliance issues here.
- U.S. Equal Employment Opportunity Commission, Overview, available here.
- Id.; Griggs v. Duke Power Co., 401 U.S. 424 (1971).
- U.S. Equal Employment Opportunity Commission, Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964, May 18, 2023, available here.
- Id. (question 3).
- Id. (question 7).
- For good and for bad on this front, at the same time the emergence of AI is causing existential crises across numerous professions, our federal government’s executive branch is facing its own existential crisis in Securities and Exchange Commission v. Jarkesy, a case being heard by the U.S. Supreme Court. The very ability of executive agencies including the EEOC to impose fines is at issue. See Noah Rosenblum, The Case That Could Destroy the Government, The Atlantic, Nov. 27, 2023, available here. One might think the EEOC’s rule-making authority is on more solid ground than that of other agencies given the express purpose for which it was founded and the long-standing U.S. Supreme Court precedent on which the disparate impact theory is based. But discretion is probably the better part of valor with respect to any predictions on such matters these days….
- Automated Employment Decision Tools: Frequently Asked Questions, NYC Consumer and Worker Protection, available here (“At a minimum, an independent auditor’s evaluation must include calculations of selection or scoring rates and the impact ratio across sex categories, race/ethnicity categories, and intersectional categories.”).
- Jacob Metcalf, What federal agencies can learn from New York City’s AI hiring law, The Hill, December 17, 2023, available here.
- See id.; see also Daniel Schwarz & Simon McCormack, Biased Algorithms Are Deciding Who Gets Hired. We’re Not Doing Enough to Stop Them, ACLU of New York, available here.
- All of the laws enforced by the EEOC, including Title VII, require individuals to file a Charge of Discrimination with the EEOC before filing a private job discrimination lawsuit against an employer. See Filing a Charge of Discrimination, U.S. Equal Employment Opportunity Commission, available here.
- A claim under a state discrimination law, in particular in a state or locality that has adopted an algorithmic discrimination bias audit requirement or guidelines such as NYC Local Law 144, is also a possible avenue.
- See the history of affirmative action.
- See, e.g., Neil Raden, The disparate impact metric falls short for fairness in algorithmic models – here’s why, diginomica, July 13, 2023, available here (discussing inherent flaws in the application of a disparate impact metric that is “a ratio of a ratio and doesn’t consider the relative sizes of the group”).
- U.S. Equal Employment Opportunity Commission, Select Issues, supra note 5 (question 6).
- Id.
- Id.
- Id. (“This document does not address other stages of the Title VII disparate impact analysis, such as whether a tool is a valid measure of important job-related traits or characteristics.”).
- Automated Employment Decision Tools: Frequently Asked Questions, supra note 9.
- See Local Laws of the City of New York for the Year 2021, No. 144, § 20-872 Penalties (“…liable for a civil penalty of not more than $500 for a first violation and each additional violation occurring on the same day as the first violation, and not less than $500 nor more than $1,500 for each subsequent violation.”), available here.
- Algorithmic Safety: Mitigating Bias in Workforce Decisions, The Data & Trust Alliance, available here.
- Guidance for automated employment decision tool vendors for responding to such questionnaires and more broadly for mitigating against being liable for algorithmic discrimination falls outside the scope of this article (and my current ability to wrap my head around the issue….). For some additional insights, see David Essex, Why algorithmic auditing can’t fully cope with AI bias in hiring, Tech Target, Aug. 26, 2021, available here.