Every Course Needs Learning Outcomes by August
The ABA's revised accreditation standards require law schools to establish measurable learning outcomes for every course, align them to programmatic outcomes, and build formative assessments into the first year — all by the 2026-2027 academic year. Most schools are not staffed for this work. An LLM can help with the drafting. It cannot help with the judgment calls that make the drafting worthwhile.
Read more
Colorado Wants to Regulate Your AI — and You Are the Deployer
Colorado's AI Act takes effect on June 30, and its deployer obligations apply to anyone who uses AI as a substantial factor in consequential decisions — including law firms. "Legal services" is one of the statute's eight enumerated categories. Most of the legal profession has not grappled with the fact that it is on the regulated side of this law.
Read more
The Errors Are More Interesting Than the Apology
Sullivan & Cromwell’s AI-contaminated bankruptcy filing has drawn coverage for the firm’s apology. The three-page errata is more revealing: errors that suggest AI corrupted correct citations during editing, a compliance program that failed despite being rigorous, and a supervision obligation the firm’s letter concedes without naming.
Read more
The Trained Volunteer Lost. The Chatbot Should Worry.
A federal court dismissed Upsolve's challenge to New York's unauthorized-practice-of-law rules, holding that trained non-lawyers cannot give individualized legal advice — even for free, even with safeguards, even with disclaimers. The opinion never mentions AI. But it describes AI legal tools more precisely than any opinion that does.
Read more
New York Wants to Ban Your Chatbot From Answering Questions
New York Senate Bill S7263 would impose civil liability on chatbot proprietors whose systems provide "substantive" responses in areas reserved for licensed professionals — and declares that disclosing the chatbot's non-human status is not a defense. The bill's impulse is understandable, but its mechanism confuses information with advice and would suppress exactly the kind of public legal education that existing law permits.
Read more
Building Infrastructure with AI: A Case Study
A law professor with no engineering background used Claude, Cowork, ChatGPT, and Gemini to design and deploy a self-hosted news aggregation pipeline over a weekend. The project worked — not because AI eliminated the need for technical skill, but because the skills it required turned out to be the ones lawyers already have.
Read more
The Model Will Not Push Back
Hallucination gets the headlines, but sycophancy may be the more dangerous failure mode for lawyers. An LLM that systematically validates your reasoning instead of challenging it functions as a mirror, not counsel. And mirrors make poor advisors.
Read more
Delegate the Task, Not the Judgment
LLMs are good at generating options, structuring information, and doing legwork. They are not good at deciding what matters. The most common mistake lawyers make with AI is not using it on the wrong task — it is asking it to exercise judgment they should be exercising themselves.
Read more
What Your AI Forgets Mid-Sentence — And What to Do About It
LLMs degrade predictably as their context windows fill — losing track of middle-document content, dropping earlier conversation history, and producing confident output built on incomplete inputs. For lawyers using these tools on long documents, the question is not whether degradation happens but how to structure your work to prevent it.
Read more