I. “…on the basis of sex…”
See Part 1 here.
II. โR-E-S-P-E-C-T. Find out what it means to meโฆ.โ
See Part 1 here.
III. The Old-Boys' Network 2.0: Discrimination in the Age of AI
See Part 1 here.
A. The glass ceiling endures: Progress without parity
See Part 1 here.
B. AI is a poison for sure…
See Part 1 here.
C. … and a cure, perhaps…
Having said all of this, I would still take a properly (i.e., responsibly) AI-trained referee in a heartbeat over a human one for the specific context of a boys' basketball game that my daughter is playing in. Responsible training would include affirmatively placing "guardrails" to mitigate this or any other bias.
Such "guardrails" could include:
- training only on data from "trusted" referees who have been separately evaluated as more objective and accurate in their calls,
- specifically constraining the model from considering race or gender (and other variables correlated with race or gender) as correlative variables used to train the model, or
- alternatively, specifically having the model consider race and gender as correlative variables during development, comparing foul-call rates between races and genders, assessing whether discrimination against particular groups is present, and, if so, determining how to modify model development to mitigate it (an illustrative audit sketch follows this list).
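To make that last guardrail concrete, the following is a minimal sketch of what such a disparate-impact audit might look like, assuming a hypothetical log of referee calls in which each record carries a demographic `group` label and a `foul_called` flag. The function name, field names, and the four-fifths-style ratio threshold are illustrative assumptions, not a prescribed method.

```python
from collections import defaultdict

def audit_foul_rates(calls, ratio_threshold=0.8):
    """Compare foul-call rates across demographic groups and flag any
    group whose rate disparity exceeds a four-fifths-style threshold."""
    fouls = defaultdict(int)   # fouls called, per group
    plays = defaultdict(int)   # total plays officiated, per group
    for record in calls:
        plays[record["group"]] += 1
        if record["foul_called"]:
            fouls[record["group"]] += 1

    # Foul-call rate for each group.
    rates = {g: fouls[g] / plays[g] for g in plays}

    # Flag groups whistled disproportionately often relative to the
    # lowest-rate group (ratio of lowest rate to theirs falls below
    # the threshold).
    lowest = min(rates.values())
    flagged = {g: rate for g, rate in rates.items()
               if rate > 0 and lowest / rate < ratio_threshold}
    return rates, flagged

# Toy data: girls are whistled at a higher rate on comparable plays.
calls = (
    [{"group": "boys", "foul_called": i % 10 == 0} for i in range(200)]
    + [{"group": "girls", "foul_called": i % 4 == 0} for i in range(200)]
)
rates, flagged = audit_foul_rates(calls)
print(rates)    # {'boys': 0.1, 'girls': 0.25}
print(flagged)  # {'girls': 0.25} -> disparity exceeds the threshold
```

In practice, an audit like this would run continuously over live call logs, and a flagged disparity would trigger human review of the training data and model development process rather than any automatic correction.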
Whatever biases might remain should pale in comparison with those latent in human referees, in the same way that autonomous vehicles will likely still get into some accidents but will be far safer than human drivers overall.
D. …IF, and only if, properly regulated.
The following is guidance provided to federal judges for evaluating whether an AI provider has taken appropriate steps to mitigate bias (i.e., algorithmic discrimination) in the implementation of its AI system.
With AI as with people, some bias is always present. But steps can be taken to minimize the risk. One mitigator is sound process: timely, contextual, and meaningful. For policymakers and engineers, "timely" means at points where input can directly influence outcomes, i.e., at the conception, design, testing, deployment, and maintenance phases of AI development and use. "Contextual" means specific to the tool and use in question and with actual knowledge of its purposes, capabilities, and weaknesses. "Meaningful" means independent, impartial, and accountable. Specifically, the person using or designing an application should validate its ethical design and use. If a particular community or group of people is likely to be affected by the use of the tool, designers and policymakers should consult with that community or group in deciding whether and how to develop, design, or use it. In addition, to the extent feasible, the system's parameters should be known, or retrievable. The system should be subject to a process of ongoing review and adjustment. The rules regarding the permissible use, if any, of social identifying descriptors or proxies should also be enunciated, clear, transparent, and subject to constitutional and ethical review. For judges and litigators, sound process also means the careful application of the Rules of Evidence to AI-generated evidence and tools on the record. – An Introduction to Artificial Intelligence for Federal Judges1

Transparency is the factor that gets the most attention, and perhaps deservedly so, particularly given the purported zero-sum game between this inaccuracy- and bias-mitigation prerequisite and the trade secret rights of the LLM and AI providers.2
But the key question for probing for bias in AI systems is, in my estimation: "Were stakeholders (groups likely to be affected by the AI application) consulted in its conception, design, development, operation, and maintenance?"3
Or were the steps even taken to identify the key stakeholders and the key areas of risk in the first place? How many lives can and will be negatively impacted by an AI-driven decision-making tool of one form or another that bases its decisions on something other than "the content of their character," while no one notices? Or worse, where no independent agent is responsible for attempting to prevent such algorithmic discrimination in the first place? And with AI slipping through the cracks of existing third-party liability law that was not designed to mitigate the novel risks AI gives rise to?4
And the coming pervasiveness of AI in all areas is another discrimination force multiplier. At least with human referees, there is a range of quality and/or bias from referee to referee, and, generally speaking, quality increases and bias decreases the higher the level you reach. And you will play under at least some referees who do not demonstrate egregious bias at least some of the time. But with AI, even if overall discrimination is reduced, whatever latent discrimination remains is likely to become far more pervasive. Whether and to what degree this should be tolerated should depend on the context, with employment-decision tools warranting a much higher degree of scrutiny than an AI-powered basketball referee.
"The greatest trick the devil ever pulled was convincing the world he didn't exist." – The Usual Suspects
IV. Conclusion
The sad reality is my daughter's decision to quit basketball was entirely rational under the circumstances. The type of discrimination she faced on the basketball court playing with the boys was quite literally impossible for her to ever overcome. Who wouldn't rationally choose to devote their time and energy to another endeavor where the playing field is far more level, presuming of course there is another one available to begin with?5
It is the law of the land that no woman or minority can be forced to make such a choice where it really counts, including when seeking employment. Title VII of the Civil Rights Act is fundamentally a "no-old-boys-network" law. The 14th Amendment to the Constitution, in principle, effectively takes off the table the possibility of individual states depriving American citizens of equal opportunity and equal protection under the law.
AI can be the cure, in theory, if implemented responsibly. But it is definitely a poison, if not the poison of all poisons, if it is not, with AI threatening to serve as the perfect tool for institutionalizing sexism and racism.
It is exceedingly hard to live up to the American ideal of equal opportunity and to avoid having one group of individuals actively marginalize another, all purportedly in the name of one American ideal or another. Competing in the global race for AI supremacy while doing so responsibly, and yet mitigating the risks of government overreach or excessive litigation, may well be the greatest societal challenge of our lifetimes. But being an American and fighting for and living up to the American way in the face of all challenges, domestic or foreign, human or AI, is supposed to be hard. The hard is what makes it great.
© 2025 Ko IP & AI Law PLLC
- James E. Baker et al., An Introduction to Artificial Intelligence for Federal Judges, The Federal Judicial Center (2023), at 39, available at https://www.fjc.gov/sites/default/files/materials/47/An_Introduction_to_Artificial_Intelligence_for_Federal_Judges.pdf. ↩️
- There are legitimate concerns on both sides of this issue. The burden of proof, however, should properly fall on the LLM providers to establish why their trade secret rights trump the 14th Amendment rights of individuals, if/when the rights of such individuals are implicated by the implementation of AI. ↩️
- Id. ↩️
- For the five-part blog article series Parsing the Blame for AI, see Part I, Part II, Part III, Part IV, and Part V. ↩️
- By way of example, an entire season of boys varsity soccer yielded, in my estimation, one sexist foul call and/or non-call in total, as opposed to the multiple instances per game we were accustomed to in basketball. This is not because soccer refs are any more enlightened; it is simply easier to be accurate and consistent with foul calls in soccer than it is in basketball.
When implementing AI, our first priority should be to identify the areas of specific, heightened risk of algorithmic bias and to take affirmative steps to mitigate it, with representation and input from the key impacted stakeholders. ↩️