David S. Kemp

Learning Machines


Lawyers are learning to work with artificial intelligence. Artificial intelligence is learning to work with law. This blog explores how — through pedagogy, practice, policy, and the ethical questions that connect them.

  • Every Course Needs Learning Outcomes by August

    April 29, 2026 AI Legal Education Legal Technology

    The ABA's revised accreditation standards require law schools to establish measurable learning outcomes for every course, align them to programmatic outcomes, and build formative assessments into the first year — all by the 2026-2027 academic year. Most schools are not staffed for this work. An LLM can help with the drafting. It cannot help with the judgment calls that make the drafting worthwhile.

  • Colorado Wants to Regulate Your AI — and You Are the Deployer

    April 27, 2026 AI Legal Ethics Legal Technology Compliance

    Colorado's AI Act takes effect on June 30, and its deployer obligations apply to anyone who uses AI as a substantial factor in consequential decisions — including law firms. 'Legal services' is one of the statute's eight enumerated categories. Most of the legal profession has not grappled with the fact that it is on the regulated side of this law.

  • The Errors Are More Interesting Than the Apology

    April 24, 2026 AI Legal Ethics Legal Technology

    Sullivan & Cromwell’s AI-contaminated bankruptcy filing has drawn coverage for the firm’s apology. The three-page errata is more revealing: errors that suggest AI corrupted correct citations during editing, a compliance program that failed despite being rigorous, and a supervision obligation the firm’s letter concedes without naming.

  • The Trained Volunteer Lost. The Chatbot Should Worry.

    April 22, 2026 AI Legal Ethics Legal Technology

    A federal court dismissed Upsolve's challenge to New York's unauthorized-practice-of-law rules, holding that trained non-lawyers cannot give individualized legal advice — even for free, even with safeguards, even with disclaimers. The opinion never mentions AI. But it describes AI legal tools more precisely than any opinion that has.

  • New York Wants to Ban Your Chatbot From Answering Questions

    April 20, 2026 AI Legal Ethics Legal Technology

    New York Senate Bill S7263 would impose civil liability on chatbot proprietors whose systems provide 'substantive' responses in areas reserved for licensed professionals — and declares that disclosing the chatbot's non-human status is not a defense. The bill's impulse is understandable, but its mechanism confuses information with advice and would suppress exactly the kind of public legal education that existing law permits.

  • Building Infrastructure with AI: A Case Study

    April 17, 2026 AI Legal Technology Prompt Engineering

    A law professor with no engineering background used Claude, Cowork, ChatGPT, and Gemini to design and deploy a self-hosted news aggregation pipeline over a weekend. The project worked — not because AI eliminated the need for technical skill, but because the skills it required turned out to be the ones lawyers already have.

  • The Model Will Not Push Back

    April 15, 2026 AI Legal Ethics Legal Technology

    Hallucination gets the headlines, but sycophancy may be the more dangerous failure mode for lawyers. An LLM that systematically validates your reasoning instead of challenging it functions as a mirror, not as counsel. And mirrors make poor advisors.

  • Delegate the Task, Not the Judgment

    March 31, 2026 AI Legal Technology Prompt Engineering

    LLMs are good at generating options, structuring information, and doing legwork. They are not good at deciding what matters. The most common mistake lawyers make with AI is not using it on the wrong task — it is asking it to exercise judgment they should be exercising themselves.

  • What Your AI Forgets Mid-Sentence — And What to Do About It

    March 30, 2026 AI Legal Technology Prompt Engineering

    LLMs degrade predictably as their context windows fill — losing track of middle-document content, dropping earlier conversation history, and producing confident output built on incomplete inputs. For lawyers using these tools on long documents, the question is not whether it happens but how to structure your work to prevent it.

  • You Probably Have a Duty to Warn Your Clients About ChatGPT

    March 27, 2026 AI Legal Ethics Attorney-Client Privilege

    Heppner established that consumer AI conversations are not privileged. But the case also raises an uncomfortable question for practicing lawyers: if a known hazard to the privilege now exists, do you have a duty to warn your clients about it? The answer, under existing ethics rules, is almost certainly yes.

  • The API Is Not a Compliance Strategy

    March 23, 2026 AI Data Privacy Compliance

    Using an LLM through an API rather than a consumer chatbot improves your data-handling posture — sometimes dramatically. But an API alone does not satisfy FERPA, HIPAA, or any other regulatory framework, and treating it as though it does mistakes a technical control for a legal one.

  • Your AI Conversations Are Not Confidential — And a Federal Court Just Said So

    March 20, 2026 AI Legal Ethics Data Privacy

    A comparison of Anthropic's data-handling policies across Claude's consumer and commercial tiers — and why the distinction now carries real legal consequences after the SDNY's decision in United States v. Heppner.


© 2026 David S. Kemp.

The views expressed on this blog are the author's own and do not represent those of any employer, institution, or affiliate. Nothing here constitutes legal advice. The author uses Claude Opus to assist with website and blog design and content.