Judge’s gavel, old clock and scale on dark background [Getty Images]

A Kenyan judge recently joked that when they asked an artificial intelligence (AI) tool to help draft a ruling, the system replied, “Please confirm jurisdiction, standard of proof, and whether snacks will be provided.”

We laugh, but the joke lands close to home. AI is no longer a futuristic guest in the legal world. It has arrived, taken a seat, opened a laptop, and started taking notes.

Across the profession, AI is already summarising cases, reviewing contracts, suggesting arguments, and predicting litigation outcomes. The real question is no longer whether AI belongs in law. It is whether law can use AI without losing its human heartbeat. Because justice is not just logic. It is judgment. And judgment is deeply human.

Have you ever noticed something else? Every serious AI system carries a built-in disclaimer. It tells you it can make mistakes. It tells you its answers are for reference only. There you have it. Even the machine warns us not to trust it blindly.

Judging is fundamentally human. It requires empathy, moral reasoning, and an understanding of context that no algorithm can replicate. When judges decide cases, they do more than analyse facts. They listen between the lines. They observe hesitation. They weigh intention. They sense when fairness requires firmness and when it requires mercy.

AI operates on pattern recognition and prediction. It can identify correlations but cannot comprehend morality. It can process precedent but cannot understand equity. This distinction isn’t technical; it’s philosophical, and it matters profoundly for justice. Justice without accountability is not justice at all. An algorithm can be efficient, but it cannot be held responsible. A judge signs a ruling and stands behind it. An algorithm produces an output and moves on. That difference matters.

The ethical line should be bright and unmistakable. AI may assist, but it must never judge. There is a safe and sensible role for AI in the justice system, but only if we defend a clear boundary around human decision making. AI can help locate precedent, summarise arguments, flag inconsistencies, and organise evidence. In that sense, it may become the most tireless research assistant the legal profession has ever known. Tireless, fast, and impressively organised, yes. But still an assistant.

This is not just caution. It is already emerging as a global consensus. Estonia, often cited as a pioneer in digital governance, uses AI tools to support court administration by anonymising documents and transcribing hearings. In small claims matters below a set financial threshold, semi-automated systems can generate payment orders based on party inputs. Even then, human oversight remains built into the process, particularly on jurisdiction and service requirements. The machine prepares. The human validates.

The European Union has taken an even firmer regulatory stance. Under the EU AI Act, systems used in the administration of justice to assist with researching or interpreting facts and law are officially classified as high-risk. That label is not symbolic. It triggers strict obligations around human oversight, transparency, and safeguards against what regulators call automation bias: the tendency to trust machine output simply because it is machine-generated.

Singapore’s judiciary has also embraced a controlled adoption model. Courts use AI for legal research and are experimenting with AI-assisted first drafts of judgments, but always subject to human review and responsibility. Its 2024 guidance on the use of generative AI by court users makes one requirement unmistakable: any AI-generated legal content must be independently verified by a legal professional before it is relied upon.

The pattern is clear across jurisdictions. Let AI support the work. Do not let it own the decision. Convenience must never outrun accountability.

The dangers are not science fiction. They are practical and already visible. AI sometimes produces legal authorities that sound perfectly real but do not exist. Lawyers around the world have already been embarrassed in court after relying on unverified AI citations.

AI can also inherit yesterday’s biases and present them as today’s conclusions. If historical decisions contained patterns of inequality, an unguarded system may quietly reproduce them. There is also the black box problem. Some AI tools cannot clearly explain how they reached a conclusion. In law, unexplained decisions are not just frustrating. They are unacceptable.

And then there is the human risk of convenience. The convenience of AI may tempt lawyers and judges to substitute quick answers for thoughtful analysis, risking what scholars call “digital deference”.

Kenya stands at a crucial juncture. In August 2025, the Judiciary announced its Artificial Intelligence Adoption Policy Framework, a structured plan to guide AI integration in case management, legal research, and administrative support.

Our courts face real pressure and real backlogs. AI offers meaningful support in administration, research, and case management. Used properly, it can reduce delay and improve access to justice.

But adoption should be guided, not improvised. Rules should come before widespread deployment, not after problems arise.

If AI is to operate within the justice system, it must do so under clear guardrails. The first and most fundamental principle is that every AI-assisted output must be reviewed and approved by a qualified legal professional. AI may serve as the sharpest research clerk the courts have ever known, but it must never sit in the judge’s seat.

Transparency must follow closely behind. Courts and lawyers should be open about when and how AI tools are used in legal processes. Public trust depends not only on fair outcomes, but on confidence in the integrity of the process. Justice cannot be partially automated in secret.

In the balance between innovation and ethics, one truth endures: justice is, and must always remain, a human endeavour.