
What Getting Fired Taught Me About Due Process—Why AI Can’t Deliver Justice… [Part 2 of 3]

If bias, fear, and risk management skew human decision-making as discussed in Part I of this article (Sects. I-V, see here), wouldn’t an AI judge be better? Cleaner, faster, less emotional? In certain respects, unequivocally yes, in particular where “politics” are involved.

But in this Part 2, we’ll explore why artificial intelligence—far from solving our problems of bias and injustice—may actually make them worse, embedding them deeper, shielding them from scrutiny, and eroding the human judgment that due process was designed to protect.


VI. The Perils of AI-Driven Adjudication

A. And you thought your boss lacked humanity…

At its core, adjudication is not just about applying rules to facts. It is about interpreting meaning, weighing values, and understanding how decisions ripple through real human lives. Even the most sophisticated AI cannot perform this task because it fundamentally lacks the capacities that justice demands: emotional understanding, cultural sensitivity, and respect for unwritten social norms.

1. AI cannot evaluate human emotions

An AI can detect certain patterns in data—words, voice tones, maybe even facial expressions—but it cannot grasp the human experience of dignity harmed or trust betrayed. Reputational damage, emotional distress, and personal humiliation often shape the gravity of a legal dispute far more than any measurable economic loss. A purely data-driven system will miss these nuances because they are not data points—it will recognize only what can be quantified, not what can be felt.

Nor can AI assess the presence—or absence—of good faith. As discussed in Part I, questions of good faith often lie at the heart of disputes involving discipline, employment, and personal integrity. Determining whether someone acted out of sincere belief, professional judgment, or retaliatory impulse requires interpreting motivations, contextual signals, and emotional undercurrents that no algorithm can meaningfully perceive. Good faith is not a binary data point; it is a nuanced moral and human evaluation.

When a system can’t tell the difference between a good-faith error and a bad-faith abuse, it cannot deliver justice—only technical outcomes, stripped of the meaning that makes due process real.

2. AI can’t grasp “unwritten rules”

Life, like tennis, runs on more than written regulations. There are basic principles of fair play, sportsmanship, and etiquette that experienced humans understand without needing explicit instruction. When disputes arise over “good sportsmanship,” for example, the question is not whether a technical rule was broken—it’s whether the spirit of fair competition was undermined.

No machine can intuit the gap between what is legally permissible and what is socially unacceptable. It is hard enough for a non-tennis player to understand the “tennis etiquette” that has developed around the sport’s unique rules and circumstances, but at least humans share the biology and experiences that make such etiquette comprehensible once explained.1 AI shares neither. An AI judge cannot “get it”—not now, not ever.

3. AI can’t assess value judgments

Fairness is not a universal algorithm. Communities often hold different norms about what is “reasonable,” “equitable,” or “proportional” based on history, tradition, and shared lived experience. In a rural school district, firm coaching might be viewed as essential discipline; in a suburban one, it might be framed as inappropriate aggression. A human judge, ideally, draws upon a lifetime of cultural literacy to weigh these factors. An AI system, in contrast, processes numbers—not narratives—and will almost inevitably impose homogenized standards that flatten crucial local differences.

Without emotional understanding, unwritten social context, or cultural sensitivity, an AI-driven adjudication—no matter how technically sophisticated—will always be hollow. It risks replacing nuanced, human justice with cold, brittle logic that may achieve formal consistency while inflicting profound substantive injustice.

B. We receive AI “recommendations” at our peril: The new self-fulfilling prophecy risk

Even if we resist the temptation to let AI make final decisions, the danger doesn’t end there. Using AI to “assist” human adjudicators—by offering risk scores, classifications, or “recommendations”—poses its own serious threat to due process. Once a machine offers an “answer,” it changes the entire dynamic of decision-making. Recommendations that are supposed to inform judgment instead begin to shape it, constrain it, and, eventually, replace it. The human role shrinks—not because the machine is wiser, but because it has seized the psychological high ground of apparent neutrality and objectivity. The risks here are no less grave; they are simply harder to see.

1. Automation Bias: Overtrusting the Machine

One of the most insidious threats to due process in an AI-assisted system is automation bias: the human tendency to overtrust machine outputs, even when they are flawed, incomplete, or misleading.2 Once an AI system labels a person as a “risk,” “problem,” or “outlier,” that label takes on a life of its own. It feels objective. Scientific. Beyond question.

The problem is, machines don’t actually understand fairness, credibility, or context. They simply detect patterns—and the patterns they see are shaped by the data they are trained on, which itself often carries hidden biases from the past. Yet human decision-makers, faced with a confident-looking algorithmic score or recommendation, often defer to it instinctively.

Examples of automation bias are all around us. Drivers have followed Google Maps instructions directly into rivers and construction sites because the app “said so.”

The psychological drivers are powerful:

  • Machines seem neutral, mathematical, and objective.
  • Outsourcing judgment to an algorithm feels safer, both professionally and psychologically, than making a morally difficult decision alone.
  • And in messy, high-stakes situations—where facts are contested, motivations are murky, and outcomes have lasting consequences—it can feel reassuring to have a “scientific” score to lean on.

But leaning on that score abdicates the human responsibility at the heart of due process: the duty to exercise independent, contextualized judgment based on the evidence, the law, and the lived realities of those involved.


2. Record Contamination: You Can’t Unring the Bell

Once an AI system generates a recommendation, a score, or a classification, it doesn’t simply “inform” the human decision-maker—it infects the entire decision-making record. And once it’s in the record, it cannot easily be ignored, dismissed, or forgotten.

A human adjudicator who agrees with the AI recommendation risks rubber-stamping an unexamined conclusion. But a human adjudicator who disagrees faces an equally perilous situation: the burden of explaining why they deviated from what now appears to be a “neutral,” “scientific,” “expert” output. The mere existence of the AI’s recommendation shifts the baseline. It forces the human decision-maker into a defensive posture, subtly (or not so subtly) framing independent judgment as the deviation that must be justified.3

This dynamic doesn’t just affect trial-level decision-makers. On appeal, reviewing bodies may second-guess any divergence from an AI-generated recommendation, even if the original adjudicator had sound human reasons for reaching a different conclusion. Over time, this pressure calcifies into institutional norms: follow the system unless you have an extraordinary reason not to.

You can’t unring the bell. Once the machine’s “view” enters the proceeding, it redefines the center of gravity for everyone involved. Human discretion isn’t guided by the AI; it is corralled by it. And with each decision made in deference to the machine, human judgment shrinks a little more.

3. Crossing the Line: AI Outputs as De Facto Expert Testimony

In federal court, the admissibility of expert opinions is governed by Daubert v. Merrell Dow Pharmaceuticals, Inc.4 Under Daubert, judges must act as gatekeepers, ensuring that any expert testimony rests on reliable methodology, applies that methodology properly to the facts at hand, and will genuinely assist the trier of fact. Without this rigorous scrutiny, an “expert opinion” cannot and should not be presented to influence a legal decision.

Due process requires Daubert-level vigilance before AI-generated recommendations are admitted into proceedings—or, worse, allowed to shape internal decision-making. Is the AI’s methodology reliable? Is its training data biased? What are its error rates, and are they acceptable? Are its techniques generally accepted in any relevant field?

If a human expert could not simply walk into court and pronounce someone a “risk” without laying the evidentiary groundwork demanded by Daubert, neither should an algorithm. Before an AI output enters the record—whether in a courtroom, a government agency decision, or even an internal administrative hearing—it must meet at least the same basic standards of reliability and relevance.5

VII. Conclusion: The Path We Must Not Take

Artificial intelligence promises speed, consistency, and freedom from human frailty—but when it comes to decisions about life, liberty, dignity, or reputation, it offers only a different form of blindness. AI cannot understand the emotional weight of reputational harm, the subtlety of unwritten rules, or the deeply human values that shape fairness. Worse, once AI-generated recommendations enter the decision-making record, they exert a gravitational pull on human judgment that few adjudicators can truly resist.

Left unchecked, AI-driven adjudication will not eliminate bias; it will automate injustice at scale—faster, colder, and with fewer safeguards than even our flawed human systems.

Importantly, the consistency that some believe is “lost” by relying on independent human judges is no real loss; the variation among judges is a profound strength. Just as our fifty states serve as fifty laboratories of democracy, so too does each judge, each adjudicator, offer a different, independent laboratory for the exercise of human judgment. While any given case may suffer an erroneous result, that error is subject to appeal, correction, and systemic learning. The overall process, enriched by diverse perspectives, is stronger, not weaker, for its human variation. A homogenized, algorithmic judiciary would calcify its errors, depriving the system of the course corrections that dissent, appeal, and human discretion allow.

If artificial intelligence cannot replace human judgment, the challenge becomes how to use it responsibly—to support fair, transparent decision-making without surrendering control.

Justice must remain a human responsibility—or it will cease to be justice at all.

© 2025 Ko IP & AI Law PLLC



  1. Yuval Noah Harari explains that human beings can understand each other’s decisions because we share the same biological makeup and cultural narratives, whereas computers do not share our biology, culture, or emotions, and thus their thought processes are fundamentally alien to us. Harari characterizes AI as an “alien intelligence”—not extraterrestrial, but nonhuman in its logic and decision-making, rendering its actions often opaque and unpredictable from a human perspective. Yuval Noah Harari, Nexus: A Brief History of Information Networks from the Stone Age to AI (Penguin Books 2024). ↩︎
  2. See Danielle Keats Citron, The Fight for Privacy: Protecting Dignity, Identity, and Love in the Digital Age, 145–48 (W.W. Norton & Co. 2022) (discussing how automation bias causes individuals and institutions to overtrust algorithmic outputs, imbuing machine-generated labels with an unwarranted sense of objectivity and scientific authority). ↩︎
  3. See State v. Loomis, 881 N.W.2d 749 (Wis. 2016). In Loomis, the defendant challenged his sentence on the ground that the trial court improperly relied on a proprietary COMPAS algorithmic risk assessment report, which he argued was both non-transparent and racially biased. The Wisconsin Supreme Court acknowledged concerns about the algorithm’s potential for racial bias but ultimately rejected the appeal, holding that there was sufficient other evidence in the record to support the trial court’s sentencing decision independently of the COMPAS score. Id. at 766–67.

    Independent analysis, notably by ProPublica in 2016, revealed that the COMPAS risk assessment tool exhibited racial bias: Black defendants were nearly twice as likely as white defendants to be falsely labeled “high risk” of reoffending (45% vs. 23%). Conversely, white defendants were more often incorrectly labeled “low risk” despite later reoffending (48% vs. 28%). This disparity, despite the model being formally race-neutral, highlights how embedded structural biases in data inputs can result in discriminatory algorithmic outcomes. See Julia Angwin, et al., Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And It’s Biased Against Blacks, ProPublica (May 23, 2016), available here.

    In the opinion of this author, the Wisconsin Supreme Court set a dangerous precedent. When an AI-generated recommendation is demonstrably tainted by bias, its introduction into the sentencing process contaminates the record, creates an impermissible risk of tainting the human decision-maker’s judgment, and should not be permitted at all. If we are serious about protecting due process, no AI tool that is acknowledged to embed discriminatory biases should be allowed to whisper in a judge’s ear. ↩︎
  4. 509 U.S. 579 (1993). ↩︎
  5. See FEDERAL JUDICIAL CTR., Artificial Intelligence: A Primer for Courts 35–40 (2024) (noting that AI outputs must be scrutinized for admissibility under existing expert evidence rules), available here. ↩︎