Why your ChatGPT history could end up in court

Legal experts suggest that metadata from AI platforms could link users to their digital footprints under Kenyan data laws | Mjengo Hub
Lawyers warn that private AI chats are not as anonymous as they seem, with Kenyan data laws potentially allowing these conversations to be used as evidence in legal disputes.

A fresh US court decision has Kenyan lawyers urging caution over what users type into popular AI chatbots. In February 2026, a New York federal judge ordered chat logs from Anthropic’s Claude to be handed to prosecutors in a securities fraud case. Kenyan legal experts say the same risks apply here.

Mary Audi and Fridah Muriithi, advocates at Muri Mwaniki Thige & Kageni Advocates, point out that users often assume their exchanges stay hidden. In reality, the platforms log metadata including IP addresses, device details and account information. Under Kenya’s Data Protection Act 2019, that information counts as personal data and can identify the person behind an anonymous-sounding prompt.

Even deleting a conversation does not wipe it from the company’s servers or training data. The lawyers stress that entering details into these systems hands information to a third-party provider. That single step removes any claim to the sort of confidentiality that protects talks with a doctor or licensed attorney.

Purity Wanja, another advocate at the same firm specialising in litigation, explains how Kenyan courts already handle digital evidence. WhatsApp messages, emails and screenshots routinely appear in cases once they meet basic authentication rules under the Evidence Act. AI chat logs, prompts and generated responses fall into the same category. Sections 106B and 78A of the Act make electronic records admissible if properly linked to the user and system.

Wanja notes that courts treat raw AI output with scepticism because the technology can invent facts. Yet a user’s own words in the prompts can still carry weight. A request for help covering up records or crafting misleading statements could serve as circumstantial evidence of intent. Opposing lawyers in civil disputes can seek disclosure orders when the material is relevant to the case.

The data itself rarely stays in Kenya. Most AI companies store information in the United States or Europe. Kenyan authorities would need a court order or mutual legal assistance treaty to reach it. Enforcement becomes harder once borders are crossed, but the lawyers say foreign legal processes can still pull in records involving Kenyan users.

Users rarely read the lengthy terms of service. Those agreements make clear that inputs may be reviewed, stored or shared with authorities. The advocates say this falls short of the informed consent required under the Data Protection Act. People typing in financial details, health records, contract drafts or client files are effectively handing over personal data without realising the exposure.

The US case that triggered the warnings involved Bradley Heppner, a former executive facing fraud charges. While represented by counsel, he used the AI tool on his own to draft defence documents. Judge Jed Rakoff ruled there was no attorney-client relationship with the chatbot. The AI had no duty of confidentiality and its terms explicitly warned against treating outputs as legal advice. Thirty-one documents were released to prosecutors.

Kenyan lawyers say the ruling serves as a practical reminder rather than a new legal principle. Attorney-client privilege simply does not extend to conversations with a machine. Even when a qualified lawyer later reviews AI-generated drafts, the original prompts and intermediate outputs may remain discoverable.

High Court decisions in recent months have already shown judges grappling with AI in legal filings. Some pleadings prepared with AI assistance have faced scrutiny over accuracy and authenticity, though the focus there was on court documents rather than private chats.

For ordinary Kenyans using these tools for quick advice on contracts, disputes or personal matters, the message is consistent. Treat every input as something that could surface later. Avoid real names, national ID numbers, case references or confidential proposals. The safest approach is to keep sensitive information out entirely.

Digital evidence rules have evolved quickly in Kenya. Courts now accept informal messages as proof of agreements, as seen in a recent High Court matter involving WhatsApp exchanges that formed a binding contract. AI chats occupy the same digital space. What feels like a private brainstorming session can become part of the official record once litigation starts.

The lawyers stop short of saying users should abandon AI tools altogether. They recommend sticking to non-sensitive queries and using enterprise versions where available, though even those have not been fully tested in Kenyan courts. The core warning remains: the conversation is never truly private.
