
Colorado Wants to Regulate Your AI — and You Are the Deployer

Colorado Senate Bill 24-205 — the Colorado AI Act (CAIA) — takes effect on June 30, 2026. Most of the attention surrounding it has focused on the wrong audience. xAI's recent federal lawsuit challenging the statute on constitutional grounds, the working group proposal to replace it with a lighter-touch framework, and the broader political debate about whether states should regulate AI at all have kept the spotlight on technology companies and their obligations as developers. But SB 24-205 does not regulate only developers. It imposes a parallel set of obligations on "deployers" — any person doing business in Colorado that uses a high-risk AI system. And the statute's definition of "consequential decision" explicitly includes decisions affecting "a legal service." Colo. Rev. Stat. § 6-1-1701(3)(h).

If you are a lawyer who uses AI tools that contribute to decisions about how you deliver legal services to Colorado residents, the Colorado AI Act treats you as a deployer. The obligations that follow are specific, documented, and enforceable at up to $20,000 per violation.

What makes an AI system "high-risk"

The threshold question under SB 24-205 is whether the AI tool qualifies as a "high-risk artificial intelligence system." The statute defines this as any AI system that, when deployed, "makes, or is a substantial factor in making, a consequential decision." § 6-1-1701(9)(a). A "consequential decision" is one that has a "material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of" one of eight categories of services: education, employment, financial services, healthcare, housing, insurance, essential government services, and legal services. § 6-1-1701(3).

The term "substantial factor" is defined broadly. It includes any use of an AI system to "generate any content, decision, prediction, or recommendation concerning a consumer that is used as a basis to make a consequential decision concerning the consumer." § 6-1-1701(11)(b). That language sweeps in more than automated decision-making in the narrow sense. A lawyer who uses an AI tool to generate a risk assessment that informs whether to take a case, to draft a memorandum that shapes a settlement recommendation, or to analyze documents in a manner that determines which facts receive attention in litigation is deploying a system that functions as a substantial factor in a consequential decision about a legal service.

The statute carves out systems intended to perform "a narrow procedural task," and those intended to "detect decision-making patterns or deviations from prior decision-making patterns" without replacing or influencing a completed human assessment. § 6-1-1701(9)(b). Spell-checking, document formatting, and basic search functionality fall outside the definition. But anything that generates substantive content, analysis, or recommendations feeding into a decision about a client's legal matter is squarely within scope.
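
For a firm triaging its tool inventory, the threshold test can be framed as a short screening question. The sketch below is a minimal illustration of that logic; the function and parameter names are my own framing of § 6-1-1701(9), not statutory terms, and the Attorney General may ultimately draw the lines differently.

```python
# Illustrative screen for the "high-risk" threshold under SB 24-205.
# Question framing is the author's own, not statutory language.

def screens_as_high_risk(
    generates_substantive_output: bool,    # content, analysis, or recommendations
    informs_consequential_decision: bool,  # used as a basis for a decision about a legal service
    narrow_procedural_task_only: bool,     # e.g., spell-check, formatting, basic search
) -> bool:
    """First-pass screen: does the tool plausibly fall within § 6-1-1701(9)?"""
    if narrow_procedural_task_only:
        return False  # carved out by § 6-1-1701(9)(b)
    return generates_substantive_output and informs_consequential_decision

# A tool that drafts a risk assessment used to decide whether to take a case
# screens in; a citation formatter screens out.
print(screens_as_high_risk(True, True, False))   # True
print(screens_as_high_risk(False, False, True))  # False
```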

What the statute requires of deployers

The deployer obligations under § 6-1-1703 are not suggestions. They form a structured compliance regime with several components.

First, deployers must implement a "risk management policy and program" specifying the principles, processes, and personnel used to identify, document, and mitigate risks of algorithmic discrimination. § 6-1-1703(2)(a). The statute ties the reasonableness of this program to the NIST AI Risk Management Framework, ISO/IEC 42001, or another nationally or internationally recognized risk management framework that the Attorney General may designate. For a law firm, this means a written policy describing how the firm evaluates AI tools for discriminatory risk before and during deployment — not a vague reference to "responsible AI use" in a technology policy, but a documented program with identified personnel and iterative review.

Second, deployers must complete impact assessments for each high-risk AI system, initially and at least annually thereafter, and within ninety days of any intentional and substantial modification. § 6-1-1703(3). These assessments must include, among other things, a statement of the system's purpose and intended use, an analysis of discrimination risks and mitigation steps, a description of data inputs and outputs, performance metrics, transparency measures, and a description of post-deployment monitoring. The assessments must be retained for at least three years after the system's final deployment. § 6-1-1703(3)(f).
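
One way for a firm to operationalize this is to treat each assessment as a structured record whose fields mirror the statutory elements. The sketch below is a minimal illustration along those lines; the field names are my own shorthand, not terms prescribed by the statute or by any Attorney General rule.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Illustrative record structure for an impact assessment under § 6-1-1703(3).
# Field names are the author's shorthand for the statutory elements.

@dataclass
class ImpactAssessment:
    system_name: str
    purpose_and_intended_use: str       # statement of purpose and intended use cases
    discrimination_risk_analysis: str   # known or foreseeable risks and mitigation steps
    data_inputs_and_outputs: str        # categories of data processed and produced
    performance_metrics: str            # how the system's performance is evaluated
    transparency_measures: str          # what consumers are told, and how
    post_deployment_monitoring: str     # ongoing oversight and safeguards
    assessment_date: date
    modifications_since_last: list[str] = field(default_factory=list)

    def next_review_due(self) -> date:
        # Assessments must be refreshed at least annually, and within ninety
        # days of any intentional and substantial modification.
        return self.assessment_date + timedelta(days=365)

# Completed records should be retained for at least three years after the
# system's final deployment (§ 6-1-1703(3)(f)).
```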

Third, deployers must notify consumers — before a consequential decision is made — that a high-risk AI system is being used. § 6-1-1703(4)(a). The notification must include a plain-language description of the system, the nature of the consequential decision, contact information, instructions for accessing the deployer's public disclosure, and information about the consumer's opt-out rights under the Colorado Privacy Act. If the consequential decision is adverse, the deployer must provide a statement of the principal reasons for the decision, including how the AI system contributed to it, what data it processed, and the sources of that data. § 6-1-1703(4)(b). The deployer must also offer the consumer an opportunity to correct inaccurate personal data and to appeal the decision with human review. § 6-1-1703(4)(b)(II)–(III).
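
The notification content reads naturally as a checklist. The sketch below simply enumerates those statutory elements as fields of two records, one for the pre-decision notice and one for the additional adverse-decision statement; the names are mine, and the statute does not prescribe any particular format.

```python
from dataclasses import dataclass

# Illustrative checklists of notice contents under § 6-1-1703(4).
# Field names are the author's shorthand, not statutory terms.

@dataclass
class PreDecisionNotice:                 # § 6-1-1703(4)(a)
    plain_language_description: str      # what the high-risk AI system is and does
    nature_of_decision: str              # the consequential decision it informs
    deployer_contact_info: str
    public_disclosure_location: str      # where the § 6-1-1703(5) statement is published
    cpa_opt_out_information: str         # Colorado Privacy Act profiling opt-out rights

@dataclass
class AdverseDecisionStatement:          # § 6-1-1703(4)(b)
    principal_reasons: str               # why the decision was adverse
    ai_system_contribution: str          # how the system contributed to the decision
    data_processed_and_sources: str
    correction_opportunity: str          # chance to correct inaccurate personal data
    appeal_with_human_review: str        # right to appeal the decision to a human
```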

Fourth, deployers must publish and periodically update a website statement summarizing the types of high-risk AI systems they deploy, how they manage discrimination risks, and the nature, source, and extent of information collected and used. § 6-1-1703(5).

Why lawyers have not been paying attention

The commentary on SB 24-205 has focused overwhelmingly on technology companies, HR departments using AI hiring tools, financial institutions running automated underwriting, and healthcare organizations deploying clinical decision-support systems. The FPF policy brief, the Brownstein Hyatt client alert, and the Fisher Phillips action steps all frame the Act's deployer obligations through the lens of employers and financial services. None of them devote sustained attention to law firms.

That focus is understandable — those sectors involve large-scale automated decisions with obvious discriminatory potential — but the statute lists legal services alongside education, employment, and healthcare, and the text draws no distinction in the obligations it assigns. A firm that uses AI to screen potential clients, evaluate claims, generate legal analysis informing case strategy, or assist in determining the terms of a settlement offer is making — or substantially contributing to — consequential decisions about legal services for consumers. The fact that a lawyer exercises independent professional judgment over the final decision does not remove the AI system from the statute's scope. The "substantial factor" definition captures the input, not just the output. If the AI-generated analysis is "used as a basis" for the decision, the system qualifies regardless of how much human review follows.

The small-business exemption offers limited relief. Deployers with fewer than fifty full-time equivalent employees are exempt from the risk management, impact assessment, and public disclosure obligations — but only if they also do not use their own data to train the high-risk AI system. § 6-1-1703(6). A solo practitioner or small firm using a commercial LLM without custom training data would likely qualify. A firm of any meaningful size would not.
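
The exemption itself turns on two facts that are straightforward to check, as the minimal sketch below illustrates. The names are mine, and the sketch captures only the two conditions described above; the statutory text should be consulted for the full scope of the exemption.

```python
# Illustrative first-pass check of the small-deployer exemption, § 6-1-1703(6).
# Captures only the two conditions discussed above; consult the statute for
# the exemption's full terms.

def exempt_from_core_obligations(full_time_equivalents: int,
                                 trains_system_with_own_data: bool) -> bool:
    """Exemption from the risk-management, impact-assessment, and public-
    disclosure obligations requires both a small headcount and no training
    of the high-risk system on the firm's own data."""
    return full_time_equivalents < 50 and not trains_system_with_own_data

print(exempt_from_core_obligations(8, False))    # True: solo/small firm, off-the-shelf LLM
print(exempt_from_core_obligations(200, False))  # False: headcount alone disqualifies
print(exempt_from_core_obligations(8, True))     # False: own-data training disqualifies
```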

The legislative uncertainty problem

The picture is further complicated by the AI Policy Work Group that Governor Polis convened in October 2025. On March 17, 2026, the working group released a proposed framework that would substantially replace SB 24-205. The proposal shifts from an audit-and-risk-management model to a transparency-and-notice model, eliminating the mandatory bias audits and impact assessments in favor of disclosure and correction rights. If enacted, the revised framework would push the effective date to January 1, 2027.

But as of this writing, no legislator has introduced a bill incorporating the working group's proposal, and the 2026 legislative session is scheduled to end on May 13. The xAI complaint, filed April 9, notes this explicitly: "the General Assembly has not introduced a bill to amend the law during the current legislative session, which ends the second week of May of this year." If the session closes without action, SB 24-205 takes effect as written on June 30 — with all of its deployer obligations intact.

The uncertainty itself creates a compliance problem. A firm that waits for legislative clarity may find itself with six weeks between the session's close and the statute's effective date to build a risk management program, complete impact assessments, draft consumer notifications, and publish a website disclosure. That is not enough time.

The disclosure obligation you already have

The CAIA's consumer notification requirement — telling clients that an AI system contributed to a consequential decision about their legal matter — reads as a new regulatory burden. For lawyers, it is less new than it appears.

Model Rule 1.4(a)(2) requires a lawyer to "reasonably consult with the client about the means by which the client's objectives are to be accomplished." Colorado RPC 1.4(a)(2) tracks the Model Rule verbatim. When a lawyer uses a generative AI tool to produce analysis that informs case strategy, settlement recommendations, or litigation decisions, the AI tool is part of the "means" — and the question of whether the client should know about it is already an ethical obligation, independent of any statute.

ABA Formal Opinion 512 addressed this directly. The Opinion identified several circumstances in which Rule 1.4 requires disclosure of AI use, including when a lawyer uses generative AI to inform important decisions about the representation. The Opinion stopped short of a blanket disclosure rule, treating the question as a fact-specific inquiry that depends on the tool's role in the representation, the significance of the task, how the tool processes client information, and whether knowledge of the tool's use would affect the client's evaluation of the lawyer's work. But the direction is clear: when AI is shaping substantive legal analysis rather than performing ministerial tasks, the client has a right to know.

What the Colorado AI Act does is formalize that obligation in statutory terms — through the consumer notification requirements described above under § 6-1-1703(4), which mandate pre-decision disclosure, plain-language descriptions, and, for adverse decisions, an explanation of the AI system's role. A lawyer who already satisfies Rule 1.4's consultation obligation by explaining to a client how AI tools contributed to a recommendation has taken a significant step toward satisfying the CAIA's deployer notification requirement. A lawyer who ignores Rule 1.4 — who uses AI to generate a risk assessment that shapes a settlement recommendation without telling the client — is now exposed on two fronts: a potential disciplinary complaint under the professional conduct rules and a potential enforcement action under the CAIA, with each violation carrying a penalty of up to $20,000.

Colorado's Supreme Court has not yet amended Rule 1.4 to address AI specifically — the Colorado Lawyer has noted that the Court is actively examining potential amendments — but the existing rule text is broad enough to require disclosure in many AI-assisted contexts without any amendment at all.

What lawyers should be doing now

I want to be careful not to overstate the scope of what SB 24-205 requires of a typical law firm. A lawyer who uses an LLM to draft a demand letter that she then reviews, edits, and sends under her own judgment may fall outside the statute's reach if the AI is performing a "narrow procedural task" rather than making or substantially contributing to a consequential decision. The line between a tool that formats prose and a tool that shapes substantive analysis is blurry, and the Attorney General has not yet issued guidance clarifying where it falls.

But the regulatory trajectory extends well beyond Colorado. Illinois has amended its Human Rights Act to regulate AI-driven employment decisions. Texas has enacted a governance framework that, while narrower than Colorado's, imposes categorical restrictions on certain AI deployments. Connecticut has introduced legislation tracking SB 24-205's structure. And the EU AI Act — which entered into force in August 2024 and phases in its obligations in stages through 2027 — employs a similar risk-based framework with deployer obligations for high-risk systems. The regulatory model that treats the entity using an AI system as responsible for its consequences — regardless of who built the model — is becoming the default approach. Lawyers who encounter it first through a Colorado enforcement action will wish they had encountered it sooner through their own compliance planning.

The remaining obligations the statute imposes — risk management programs, impact assessments, public disclosures — lack direct analogs in the professional conduct rules but build on duties the profession already recognizes in principle. Model Rule 5.3 requires reasonable efforts to supervise nonlawyer assistance, a category ABA Formal Opinion 512 reads to encompass AI tools. The duty of competence under Model Rule 1.1 — elaborated in Comment 8 to include technological competence — requires lawyers to understand the benefits and risks of the technology they use. Formal Opinion 512 likewise expects lawyers to have a reasonable understanding of how an AI tool works before relying on it. What the Colorado AI Act adds to these existing duties is a specific enforcement mechanism: documented compliance obligations, prescribed timelines, a public disclosure requirement, and per-violation penalties enforced by the Attorney General rather than by bar disciplinary authorities operating on a case-by-case basis.

The profession has spent two years debating whether AI output is protected speech, whether chatbots practice law, and whether hallucinated citations warrant sanctions — all worth examining, but none of them the regulatory question that will affect the most lawyers first. That question is simpler and less dramatic: when you use an AI system to inform a decision about a client's legal matter, are you prepared to document how you evaluated it, disclose that you used it, and explain what it did?


This post draws on the text of Colorado Senate Bill 24-205 (signed May 17, 2024; effective date delayed to June 30, 2026); the Future of Privacy Forum policy brief on the Colorado AI Act (July 2024); the Colorado AI Policy Work Group's proposed framework (March 17, 2026); the ABA Model Rules of Professional Conduct and Formal Opinion 512; and Colorado RPC 1.4 as discussed in the Colorado Bar Association's survey of AI and professional conduct obligations. The deployer-obligations analysis builds on the data-handling and supervision frameworks discussed in prior posts on confidentiality, API compliance, judgment delegation, and supervisory duties.