Ko IP & AI Law PLLC

Arizona patent lawyer focused on intellectual property & artificial intelligence law. Own your ideas, implement your AI, and mitigate the risks.

“Because I’m not white and I’m a girl…”: On the AI road to autonomous discrimination? [Part 2 of 2]

I. “…on the basis of sex…”

See Part 1 here.

II. “R-E-S-P-E-C-T. Find out what it means to me….”

See Part 1 here.

III. The Old-Boys’ Network 2.0: Discrimination in the Age of AI

See Part 1 here.

A. The glass ceiling endures: Progress without parity

See Part 1 here.

B. AI is a poison for sure…

See Part 1 here.

C. … and a cure, perhaps….

Having said all of this, I would still take a properly (i.e., responsibly) trained AI referee in a heartbeat over a human one for the specific context of a boys basketball game that my daughter is playing in. Responsible training would include affirmatively placing “guardrails” to mitigate this or any other bias.

Such โ€œguardrailsโ€ could include:

  • training only on “trusted” referees that are separately evaluated as being more objective and accurate with their calls,
  • specifically constraining the model from considering race or gender (or other variables correlated with race or gender) as variables that can be used to train the model, and
  • alternatively, specifically having the model consider race and gender as training variables, comparing foul rates between races and genders, assessing whether discrimination against particular groups is present, and, if so, determining how to modify model development to mitigate it (a sketch of such an audit follows this list).
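
The third guardrail is, in effect, a disparate-impact audit. Below is a minimal sketch of such an audit in Python, loosely patterned on the “four-fifths” rule used in the employment context; the function name, the group labels, and the 0.8 threshold are my own illustrative assumptions, not an established standard for officiating.

```python
from collections import defaultdict

# Hypothetical audit: compare foul-call rates across groups and flag
# disparities, loosely patterned on the "four-fifths" rule from the
# employment context. The 0.8 threshold and group labels are
# illustrative assumptions, not an established officiating standard.

def foul_rate_audit(calls, threshold=0.8):
    """calls: iterable of (group, fouled) pairs, where fouled is True
    if the referee (human or model) called a foul on that play."""
    totals, fouls = defaultdict(int), defaultdict(int)
    for group, fouled in calls:
        totals[group] += 1
        fouls[group] += int(fouled)

    rates = {g: fouls[g] / totals[g] for g in totals}
    baseline = min(rates.values())  # most favorably treated group
    # Flag any group whistled substantially more often than the baseline.
    flags = {g: r for g, r in rates.items()
             if baseline > 0 and baseline / r < threshold}
    return rates, flags

# Made-up data: girls are whistled at twice the rate of boys.
plays = ([("boys", True)] * 10 + [("boys", False)] * 90
         + [("girls", True)] * 20 + [("girls", False)] * 80)
rates, flags = foul_rate_audit(plays)
print(rates)  # {'boys': 0.1, 'girls': 0.2}
print(flags)  # {'girls': 0.2} -> flagged for review
```

An audit like this says nothing about why a disparity exists; it only surfaces the pattern so that model development can be revisited, which is precisely the point of the third guardrail.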

Whatever biases might remain should pale in comparison with those latent in human referees, in the same way that autonomous vehicles will likely still get into some accidents but will be far safer than human drivers overall.

D. …IF, and only if, properly regulated.

The following is guidance provided to federal judges for evaluating whether an AI provider has taken appropriate steps to mitigate bias (i.e., algorithmic discrimination) in the implementation of its AI system.



With AI as with people, some bias is always present. But steps can be taken to minimize the risk. One mitigator is sound process—timely, contextual, and meaningful. For policymakers and engineers, “timely” means at points where input can directly influence outcomes, i.e., at the conception, design, testing, deployment, and maintenance phases of AI development and use. “Contextual” means specific to the tool and use in question and with actual knowledge of its purposes, capabilities, and weaknesses. “Meaningful” means independent, impartial, and accountable. Specifically, the person using or designing an application should validate its ethical design and use. If a particular community or group of people is likely to be affected by the use of the tool, designers and policymakers should consult with that community or group in deciding whether and how to develop, design, or use it. In addition, to the extent feasible, the system’s parameters should be known, or retrievable. The system should be subject to a process of ongoing review and adjustment. The rules regarding the permissible use, if any, of social identifying descriptors or proxies should also be enunciated, clear, transparent, and subject to constitutional and ethical review. For judges and litigators, sound process also means the careful application of the Rules of Evidence to AI-generated evidence and tools on the record. – An Introduction to Artificial Intelligence for Federal Judges [1]
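
The guidance reads as a checklist, and it can be operationalized as one. What follows is a minimal sketch of an internal governance record tracking those factors; the AIGovernanceRecord class, its field names, and the example values are my own illustrative assumptions, not anything prescribed by the FJC guide.

```python
from dataclasses import dataclass, field

@dataclass
class AIGovernanceRecord:
    # Hypothetical record keyed to the FJC guidance; all names here
    # are illustrative assumptions, not the guide's own terminology.
    tool_name: str
    intended_use: str                     # "contextual": specific tool and use
    known_limitations: list[str]          # purposes, capabilities, weaknesses
    review_phases: list[str] = field(default_factory=lambda: [
        "conception", "design", "testing", "deployment", "maintenance",
    ])                                    # "timely": input at each phase
    stakeholders_consulted: list[str] = field(default_factory=list)
    independent_reviewer: str = ""        # "meaningful": impartial, accountable
    parameters_retrievable: bool = False  # parameters known or retrievable
    protected_attribute_policy: str = ""  # rules on descriptors or proxies

    def open_questions(self) -> list[str]:
        """List the checklist items this record does not yet satisfy."""
        gaps = []
        if not self.stakeholders_consulted:
            gaps.append("no affected stakeholders consulted")
        if not self.independent_reviewer:
            gaps.append("no independent, accountable reviewer named")
        if not self.parameters_retrievable:
            gaps.append("parameters not known or retrievable")
        if not self.protected_attribute_policy:
            gaps.append("no stated policy on protected attributes or proxies")
        return gaps

record = AIGovernanceRecord(
    tool_name="AI basketball referee",
    intended_use="foul calls in youth co-ed games",
    known_limitations=["trained on human referees' historical calls"],
)
print(record.open_questions())
```

None of this substitutes for substantive review, but a record of this kind is the sort of artifact a court, or a regulator, could actually ask for.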

Transparency is the factor that gets the most attention, and perhaps deservedly so, particularly given the purported zero-sum game between this inaccuracy- and bias-mitigation prerequisite and the trade secret rights of LLM and AI providers. [2]

But the key question for probing for bias in AI systems, in my estimation, is: “Were stakeholders—groups likely to be affected by the AI application—consulted in its conception, design, development, operation, and maintenance?” [3]

Do we really expect LLM providers to self-regulate responsibly and take all of these steps without oversight?
The anti-DEI movement and the interests of social media & AI providers are aligned here. And technology powers have never been more politically active than today. Coincidences?

Or even to take the steps necessary to identify the key stakeholders and the key areas of risk in the first place? How many lives can and will be negatively impacted by an AI-generated decision-making tool of one form or another that bases its decision on something other than “the content of their character,” but no one notices? Or worse, no independent agent is responsible for attempting to prevent such algorithmic discrimination in the first place? And with AI slipping through the cracks of existing third-party liability law that was not designed to mitigate the novel risks AI gives rise to? [4]

And the coming pervasiveness of AI in all areas is another discrimination force multiplier. At least with human referees, there is a range of quality and/or bias from referee to referee, and, generally speaking, quality increases and bias decreases the higher the level you reach. And you will play under at least some referees who do not demonstrate egregious bias at least some of the time. But with AI, even if overall discrimination is reduced, whatever latent discrimination remains is likely to become far more pervasive. Whether and to what degree this should be tolerated should depend on the context, with employment-decision tools bearing a much higher degree of scrutiny than an AI-powered basketball referee.


“The greatest trick the devil ever pulled was convincing the world he did not exist.” – The Usual Suspects

IV. Conclusion

The sad reality is my daughter’s decision to quit basketball was entirely rational under the circumstances. The type of discrimination she faced on the basketball court playing with the boys was quite literally impossible for her to ever overcome. Who wouldn’t rationally choose to devote their time and energy to another endeavor where the playing field is far more level, presuming of course there is another one available to begin with? [5]

It is the law of the land that no woman or minority can be forced to make such a choice where it really counts, including when seeking employment. Title VII of the Civil Rights Act is fundamentally a “no-old-boys’-network” law. The 14th Amendment of the Constitution, in principle, effectively removes from the table the possibility of individual states depriving American citizens of equal opportunity and equal protection under the law.

AI can be the cure, in theory, if implemented responsibly. But if it is not, it is definitely a poison, if not the poison of all poisons, with AI threatening to serve as the perfect tool for institutionalizing sexism and racism.

It is exceedingly hard to live up to the American ideal of equal opportunity and to avoid having one group of individuals actively marginalize another, all purportedly in the name of one American ideal or another. Competing in the global race for AI supremacy, doing so responsibly, and yet mitigating the risks of government overreach or excessive litigation may well be the greatest societal challenge of our lifetimes. But being an American and fighting for and living up to the American way in the face of all challenges, domestic or foreign, human or AI, is supposed to be hard. The hard is what makes it great.

© 2025 Ko IP & AI Law PLLC


  1. James E. Baker et al., An Introduction to Artificial Intelligence for Federal Judges, The Federal Judicial Center (2023), at 39, available at https://www.fjc.gov/sites/default/files/materials/47/An_Introduction_to_Artificial_Intelligence_for_Federal_Judges.pdf.
  2. There are legitimate concerns on both sides of this issue. The burden of proof, however, should properly fall on the LLM providers to establish why their trade secret rights trump the 14th Amendment rights of individuals, if/when the rights of such individuals are implicated by the implementation of AI.
  3. Id.
  4. For the five-part blog article series Parsing the Blame for AI, see Part I, Part II, Part III, Part IV, and Part V.
  5. By way of example, an entire season of boys varsity soccer yielded, in my estimation, one total sexist foul call and/or non-call, as opposed to the multiple instances per game we were accustomed to in basketball. This is not because soccer refs are any more enlightened; it is simply easier to be accurate and consistent with foul calls in soccer than it is in basketball.

     When implementing AI, it should be our first priority to identify those areas of specific, heightened risk of algorithmic bias and to take affirmative steps to mitigate it, with representation and input from the key impacted stakeholders.