On April 7, 2025, New York State Senator Kristen Gonzalez introduced Senate Bill 7263, a two-page proposal to amend the General Business Law by adding a new Section 390-f. The bill passed the Senate Internet and Technology Committee 6-0 on February 25, 2026, reached the floor calendar the next day, and has a companion bill in the Assembly (A6545). It is short enough to read in five minutes and ambitious enough to reshape how AI systems operate across more than fifty licensed professions in the state.
The bill's core provision prohibits a chatbot "proprietor" from permitting its chatbot to "provide any substantive response, information, or advice, or take any action which, if taken by a natural person," would constitute a crime under Education Law Section 6512 (unauthorized practice of a licensed profession, a Class E felony) or Section 6513 (unauthorized use of a professional title, a Class A misdemeanor), or would violate the provisions of Judiciary Law Article 15 prohibiting the unauthorized practice of law. A willful violation triggers a private right of action for actual damages, attorneys' fees, and costs.
Disclosure: I am presently affiliated with Justia, whose mission is to make legal materials and information freely available to the public. The bill, if enacted in its current form, would directly threaten that mission and the work of every organization that uses AI tools to help people understand their legal rights. That affiliation informs my perspective, and I want to be transparent about it. But the arguments that follow are grounded in the text of the bill and the doctrine it invokes, and I would make them regardless.
What the bill covers
The bill's reach is determined by the licensed professions it incorporates by reference. Education Law Sections 6512 and 6513 apply to every profession licensed under Title 8 of the Education Law — articles 131 through 163, covering medicine, dentistry, nursing, pharmacy, physical therapy, psychology, social work, architecture, engineering, land surveying, public accountancy, veterinary medicine, dietetics, occupational therapy, speech-language pathology, acupuncture, athletic training, mental health counseling, and more. Judiciary Law Article 15, Section 478, separately prohibits any non-admitted person from practicing law, furnishing legal counsel, or holding themselves out as entitled to practice law.
A chatbot that answers a question about how to read a blood pressure result could be providing a "substantive response" in an area governed by medical licensure. A chatbot that explains what a particular contract clause means could be furnishing legal counsel. A chatbot that tells a user how load-bearing calculations work could be practicing engineering. The bill does not distinguish between a system that holds itself out as a licensed professional and one that provides information about a licensed profession's subject matter. It prohibits any "substantive response, information, or advice" that would, if provided by a natural person, fall within the scope of a licensed practice.
That "if taken by a natural person" framing is the bill's central analytical move, and it is the source of most of its problems.
The line the bill cannot draw
New York's unauthorized-practice statutes were written for a specific problem: unlicensed humans holding themselves out as professionals, or performing professional services without the training, examination, and oversight that licensure requires. The statutes presuppose a natural person who could, in principle, obtain a license — and who either failed to do so or had one revoked. Applying the same statutes to AI outputs requires treating a chatbot's response as equivalent to a human professional's judgment, which means treating the delivery of information as equivalent to the rendering of professional advice.
But the law has never treated those as the same thing. A person can publish a book explaining how to read a contract without practicing law. A journalist can write a detailed article about the side effects of a medication without practicing medicine. A website can host the full text of every statute and regulation in the country — as Justia does — and provide tools for navigating them, without anyone suggesting that the website is practicing law. The distinction between providing information about a professional domain and practicing in that domain is embedded throughout the case law on unauthorized practice, and it exists for a reason: public access to information about law, medicine, engineering, and other regulated fields serves the same public interest that licensure itself is meant to protect.
S7263 collapses that distinction. The bill prohibits not just "advice" but "information" — and not just information that purports to be professional advice, but any "substantive response" that falls within the subject matter of a licensed profession. The word "substantive" does no limiting work here; it excludes only responses that are trivial or empty. A chatbot that explains what a statute says is providing a substantive response about a legal topic. A chatbot that describes the symptoms associated with a medical condition is providing a substantive response about a medical topic. Under the bill's plain language, both are prohibited if a natural person providing the same information without a license could be prosecuted.
The problem is that a natural person providing the same information often cannot be prosecuted, because providing information is not the same as practicing a profession. The bill's "if taken by a natural person" framing imports a body of law that draws exactly the line the bill is trying to erase.
The non-waiver provision
Section 2(b) of the bill provides that "[a] proprietor may not waive or disclaim this liability merely by notifying consumers that they are interacting with a non-human chatbot system." Section 4 then requires proprietors to provide "clear, conspicuous and explicit notice" that the user is interacting with an AI system.
Read together, these provisions create an unusual structure. The bill requires disclosure but declares that disclosure does not reduce liability. In conventional consumer-protection law, disclosure is the remedy — or at least a significant mitigant. Federal securities law, state consumer-fraud statutes, the FTC Act's advertising standards, and most product-liability regimes treat adequate disclosure as the mechanism through which consumers can make informed choices. The logic is that a consumer who knows the relevant facts and proceeds anyway has assumed a different kind of risk than one who was deceived.
S7263 rejects that logic. A chatbot proprietor must tell users they are interacting with an AI system, but that disclosure does not function as a defense. The bill treats disclosure as a separate obligation, not a safe harbor. A proprietor who makes the required disclosure and whose chatbot then provides a "substantive response" about a medical condition, a legal question, or an engineering problem is liable for the same damages as a proprietor who concealed the chatbot's nature entirely.
This is a deliberate choice, and its implications run deep. The bill's non-waiver provision means that the prohibited conduct is not deception — it is information delivery. The wrong the bill targets is not that a consumer was tricked into thinking she was talking to a doctor or a lawyer. The wrong is that a machine answered her question at all, regardless of whether she knew she was talking to a machine. That framing has consequences. It means the bill is not an anti-impersonation statute, despite the legislative summary's characterization of it as one. A bill that prohibited chatbots from falsely representing themselves as licensed professionals would be a narrower, more conventional consumer-protection measure — and would raise fewer constitutional concerns. S7263 goes further. It prohibits the underlying information delivery, even when the consumer has full knowledge that the source is an AI system.
Others have suggested exactly this narrower approach — a false-representation statute — as a better-tailored alternative. That suggestion deserves more attention than it has received.
The access-to-justice problem
The commentary on S7263 has clustered around two poles. Practitioner-oriented analyses have flagged drafting defects — overbroad definitions, the omission of certain professions, contradictions between the disclosure and liability provisions. Policy-oriented critiques have characterized the bill as protectionism that shields incumbent professionals from competition. Both lines of criticism have force, but neither engages with the specific tension the bill creates between professional regulation and public access to information.
The numbers give that tension a shape. In 2024, the Legal Services Corporation reported that 92 percent of civil legal problems faced by low-income Americans received inadequate or no legal help. New York's own court system has long recognized the crisis of unrepresented litigants — the Office of Court Administration estimated that approximately 1.8 million New Yorkers appear in civil court proceedings without a lawyer each year. These are people who cannot afford a licensed professional and who turn to whatever information sources are available: self-help guides, court navigators, legal aid hotlines, and increasingly, AI tools.
A bill that prohibits chatbots from providing "substantive responses" about legal topics does not protect these people. It removes a source of information without replacing it with anything. The licensed professionals the bill protects are, by definition, not available to the population most likely to rely on chatbot-delivered legal information — because if those professionals were available and affordable, the access-to-justice gap would not exist.
I want to be careful not to overstate the case. AI-generated legal information can be wrong, and a person who relies on a chatbot's incorrect summary of the law may be worse off than a person who sought no information at all. The concerns about hallucination and sycophancy that I have discussed in previous posts are directly relevant here — a chatbot that confidently validates a user's mistaken understanding of her legal rights can cause concrete harm, as the Dela Torre sequence in the Nippon litigation illustrates. But the remedy for unreliable information is not the prohibition of information. It is quality standards, disclosure requirements, and — where appropriate — regulatory frameworks that distinguish between tools that purport to replace professional judgment and tools that help people understand the professional domain.
What a better framework looks like
The National Center for State Courts published a white paper in August 2025 proposing exactly this kind of framework. Rather than extending existing UPL prohibitions to AI tools, the NCSC recommended that states revise their unauthorized-practice regulations to permit vetted AI tools that meet disclosure, data-security, and transparency requirements — with regulatory sandboxes allowing controlled deployment. Bonardi and Branting's article in the Columbia Science and Technology Law Review took the argument further, proposing a capability-based certification system under which AI legal assistants would be tested against benchmark datasets and, if they meet a jurisdiction-set accuracy threshold, added to UPL exemption lists.
Both approaches share a premise that S7263 rejects: that the relevant question is not whether a machine provided information about a licensed profession, but whether the information meets a standard the public can rely on. The NCSC framework asks whether the tool is accurate, transparent, and properly disclosed. S7263 asks only whether the tool answered the question at all.
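To make the certification mechanics concrete, here is a minimal sketch of the gating logic that a capability-based regime of the Bonardi-Branting kind implies: score a tool against a benchmark, compare the score to a jurisdiction-set threshold, and admit or reject it accordingly. Everything in the sketch is hypothetical — the BenchmarkItem structure, the sample questions, the exact-match grading, and the 0.9 threshold are illustrative stand-ins of my own, not anything drawn from the article or from any actual certification regime.

    # Toy sketch of a capability-based certification gate, per the framework
    # discussed above. All names and values here (BenchmarkItem, the sample
    # questions, exact-match scoring, the 0.9 threshold) are hypothetical.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass(frozen=True)
    class BenchmarkItem:
        question: str
        expected: str  # a reference answer the certifying body has approved

    def accuracy(tool: Callable[[str], str], benchmark: List[BenchmarkItem]) -> float:
        """Fraction of benchmark questions the tool answers correctly.
        Naive exact-match grading; a real regime would grade more carefully."""
        correct = sum(1 for item in benchmark if tool(item.question) == item.expected)
        return correct / len(benchmark)

    def certify(tool: Callable[[str], str], benchmark: List[BenchmarkItem],
                threshold: float) -> bool:
        """Admit the tool to a UPL exemption list only if it clears the
        jurisdiction-set accuracy threshold on the benchmark."""
        return accuracy(tool, benchmark) >= threshold

    if __name__ == "__main__":
        # Hypothetical two-question benchmark and a canned-answer "tool".
        benchmark = [
            BenchmarkItem("Deadline to answer a complaint?", "20 days"),
            BenchmarkItem("Small-claims court limit?", "$10,000"),
        ]
        canned = {
            "Deadline to answer a complaint?": "20 days",
            "Small-claims court limit?": "$5,000",  # wrong on purpose
        }
        tool = lambda q: canned.get(q, "")
        print(certify(tool, benchmark, threshold=0.9))  # False: 50% accuracy

The point of the sketch is its shape, not its details: under a certification approach, the legal consequence keys on measured capability rather than on the category of the speaker, which is exactly the premise S7263 rejects.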
The bill's approach also sits in tension with the broader direction of AI regulation in New York. The AI Companion Models Law, which took effect in November 2025, addressed a specific, documented harm — suicidal ideation in AI interactions — with targeted safeguards. S7263 addresses an entire category of speech by prohibiting it, across fifty-plus professions, regardless of accuracy, context, or user knowledge. The contrast illustrates the difference between regulation designed around identified harms and regulation designed around categories of actors.
The disclosure question revisited
In an earlier post, I argued that the duty to inform clients about AI-related risks runs through the attorney — that Model Rules 1.1, 1.4, and 1.6 impose obligations on lawyers to counsel clients about the risks of consumer AI use, not on AI providers to prevent that use in the first place. S7263 takes the opposite structural approach. It imposes the obligation on the proprietor and makes the proprietor liable for willful violations regardless of what the user knew or chose.
That structural choice raises a question the bill does not answer: what happens when the user wants the information? The bill's non-waiver provision means that an informed consumer who knowingly turns to a chatbot for legal or medical information — because she cannot afford a professional, because she wants to understand her situation before hiring one, or because she is conducting research — is a consumer whose choice the law overrides. The proprietor is liable not because the consumer was misled but because the machine answered.
This paternalism might be defensible if chatbot-delivered information were categorically more dangerous than the alternatives available to unrepresented individuals — legal self-help books, generic web searches, advice from friends, or no information at all. But no one has made that showing, and the bill does not require it.
Where this leaves us
The concern animating S7263 is legitimate. Chatbots can produce confident, authoritative-sounding output on medical, legal, and engineering questions while lacking the training, judgment, and accountability that professional licensure requires. The Dela Torre sequence — where ChatGPT validated a client's emotional reaction to her lawyer's advice, helped her fire her attorneys, and then drafted dozens of meritless filings — is a vivid example of the harm that can result when people treat AI output as professional counsel. The impulse to prevent that harm through regulation is sound.
But the mechanism S7263 chooses — prohibiting "substantive responses" across every licensed profession, declaring disclosure irrelevant, and imposing liability based on subject-matter coverage rather than the nature or quality of the response — sweeps in conduct that existing unauthorized-practice law does not prohibit and that the public has a strong interest in preserving. The bill does not distinguish between a chatbot that says "I am your lawyer and you should file this motion" and one that says "here is what CPLR 3211 provides and here are the grounds on which courts have granted motions to dismiss under that section." That distinction is the one that separates regulation of professional practice from suppression of public information — and any bill that aspires to do the former without the latter needs to draw it.
This post draws on New York Senate Bill 7263 (2025-2026 Regular Session); New York Education Law Section 6512 and Section 6513; New York Judiciary Law Article 15; the National Center for State Courts' white paper on modernizing UPL regulations (August 2025); Bonardi & Branting, Certifying Legal AI Assistants for Unrepresented Litigants, 26 Colum. Sci. & Tech. L. Rev. 1; and the Legal Services Corporation's 2024 report on unmet legal need. The discussion of sycophancy and judgment delegation builds on prior posts in this series.