Rahman AI legal case: judge accuses barrister of misleading tribunal with fake AI-generated legal authorities
The Rahman AI legal case has prompted serious concern within the legal profession after a judge found that a barrister had used generative artificial intelligence to draft legal arguments containing entirely fictitious or irrelevant case law.
In a rare judicial postscript, Upper Tribunal Judge Mark Blundell accused immigration barrister Chowdhury Rahman of attempting to mislead the court by submitting an appeal grounded in false legal authorities, apparently generated by ChatGPT-like software.
Rahman, who represented two Honduran asylum seekers facing gang-related persecution, cited 12 legal cases in his written arguments.
Upon review, the judge found that some of these cases did not exist, while others bore no relevance to the legal propositions put forward.
“Not one decision supported the proposition of law set out in the grounds,” said Judge Blundell, calling the submissions “misleading” and a waste of tribunal time.
While Rahman claimed the issues stemmed from his “drafting style” and admitted to using “various websites” for legal research, the judge was unconvinced.
He stated it was “overwhelmingly likely” that Rahman had used generative AI to formulate the appeal and then tried to hide that fact during proceedings.
Blundell added:
“Even if Mr Rahman thought these cases somehow supported his arguments, he cannot explain the entirely fictitious citations.”
The ruling, handed down in September but published this week, sets a precedent in the UK for how the legal system may respond to the uncritical use of AI in formal court submissions.
Blundell also suggested he was considering referring Rahman to the Bar Standards Board, further escalating the matter beyond judicial censure.
The barrister’s conduct was revealed during an asylum appeal for two sisters, aged 29 and 35, whose case had reached the upper tribunal.
Despite the gravity of the claims, the appeal was dismissed outright, with the judge finding no legal error in the initial decision.
This case adds to growing global concern about the unregulated use of generative AI in legal and professional contexts, especially when its outputs are presented as fact without verification.