The clash between artificial intelligence and traditional legal norms reached a boiling point in a New York courtroom last month when a 74-year-old entrepreneur attempted to argue his case using an AI-generated avatar. Jerome Dewald, representing himself in an employment dispute against MassMutual Metro New York, presented a video of a digitally created lawyer named “Jim,” a youthful, sweater-clad avatar that bore no resemblance to Dewald himself. The stunt backfired immediately when the judges realized the “attorney” was synthetic, prompting a sharp rebuke from Associate Justice Sallie Manzanet-Daniels: “You will not use this courtroom as a platform for your business, sir.”
Judicial pushback against AI’s encroachment into legal proceedings is hardening nationwide. New York’s Appellate Division isn’t alone in rejecting unvetted technological shortcuts: an Illinois Supreme Court policy now mandates strict oversight of AI-generated evidence, while a Saratoga County judge recently subjected Microsoft’s Copilot chatbot to direct questioning about its reliability. These developments underscore a growing consensus: while AI may assist with legal research or administrative tasks, courts demand transparency and human accountability. “When attorneys and experts abdicate their judgment to AI, the quality of our legal profession suffers,” warned U.S. District Judge Laura Provinzino of Minnesota in a recent ruling.
The Dewald incident exposes deeper ethical fault lines. His startup, Pro Se Pro, markets AI avatars as tools for self-representation, a concept critics argue prioritizes cost-cutting over constitutional safeguards. While Dewald says throat cancer makes prolonged speech difficult for him, judges counter that medical accommodations exist without resorting to synthetic advocates. The episode follows a pattern of AI overreach: lawyers fined for citing ChatGPT-invented cases, the “robot lawyer” firm DoNotPay penalized $193,000 by the Federal Trade Commission for false advertising, and experts caught relying on AI for financial calculations they couldn’t explain.
Proponents of courtroom AI point to pilot programs in India and the Philippines, where governments deploy machine learning to translate legal documents and predict case outcomes. Yet these initiatives operate under rigorous oversight frameworks absent from Dewald’s entrepreneurial experiment. Even tech-forward jurisdictions proceed cautiously: the Philippines’ Sandiganbayan, the country’s anti-graft court, limits AI to voice-to-text transcription, avoiding discretionary tasks. “Efficiency cannot come at the expense of due process,” emphasized Senior Associate Justice Marvic Leonen of the Philippines’ Supreme Court during recent AI governance talks.
As debates intensify, legislative momentum is building to codify AI’s role in justice systems. New York courts have signaled that AI-generated evidence should face pre-trial reliability hearings, and federal proposals would mandate disclosure of AI use in legal filings. For Dewald, the lesson is clear: innovation must respect institutional boundaries. His appeal, still pending after the panel cut off the video and had him argue in person, serves as a cautionary tale for those who prioritize technological novelty over courtroom decorum. While AI may someday streamline judicial workflows, its present role remains firmly in the hands of human practitioners who understand that justice requires more than algorithmic outputs.