ANCHORAGE, Alaska, January 3, 2026 – The Alaska Court System has temporarily disabled its AI chatbot, launched in November 2025 to assist the public with procedural questions, following reports of inaccurate responses and concerns over potential privacy risks and unauthorized legal advice.
Soon after launch, users encountered errors, such as outdated fee information and conflicting procedural guidance, prompting complaints from lawyers and self-represented litigants.
The Alaska Bar Association raised objections in a December letter, arguing the chatbot could mislead vulnerable users into relying on it as authoritative guidance, potentially constituting unauthorized practice of law.
Privacy advocates noted the risks of individuals sharing sensitive case details, even though conversations were not stored or used for training purposes.
Court spokesman Jonathon Lack confirmed the pause in late December for review: “We’re committed to improving access to justice, but AI tools in this context require rigorous accuracy and safeguards.” Administrators plan enhancements, including better fact-checking and human oversight, before relaunch.
The initiative was aimed at the self-represented parties involved in more than 80% of Alaska's civil cases, in a state where geography poses steep barriers to legal aid. Similar court chatbots in Utah and Texas have faced comparable scrutiny over "hallucinations"—fabricated information—and the potential for bias.
Experts like Stanford’s Daniel Ho advocate “human-in-the-loop” systems for high-stakes public services. The episode reflects broader challenges in deploying generative AI for government functions, balancing efficiency with reliability.
As courts nationwide test AI for administrative support, Alaska’s experience serves as a cautionary case for careful implementation.