Overview of Current U.S. Comprehensive AI Laws

As more businesses look to artificial intelligence (AI) to enhance their operations, several states have introduced or passed comprehensive legislation to regulate AI and protect consumers, with a particular focus on how companies use personal data to train and develop AI and machine learning models and algorithms. Below is a short overview of the legislation recently passed in Utah and Colorado and what companies can do now to prepare for these and other AI-specific state laws expected in the near future.

UTAH
The Utah Artificial Intelligence Policy Act (UAIP), which took effect May 1, 2024, focuses on consumer protection and transparency in the use of generative AI systems. Key provisions include disclosure requirements for regulated occupations, such as healthcare providers, and for non-regulated businesses when consumers ask directly whether they are interacting with AI. The UAIP also provides penalties and enforcement mechanisms for violations and establishes the Office of Artificial Intelligence Policy and the AI Learning Laboratory Program to encourage innovation and responsible AI development under controlled conditions.

Scope
The UAIP applies to persons who use, prompt, or otherwise cause generative artificial intelligence to interact with a Utah consumer. “Generative artificial intelligence” is defined in the UAIP as an artificial system that: (i) is trained on data; (ii) interacts with a person using text, audio, or visual communication; and (iii) generates non-scripted outputs similar to outputs created by a human, with limited or no human oversight.

Disclosure Requirements
Any person using generative AI must, if asked by the consumer with whom the generative AI interacts, disclose that the consumer is interacting with generative AI (and not a human).

Any person who provides the services of a “regulated occupation” must prominently disclose to consumers that they are interacting with generative AI in the provision of regulated services. This disclosure must be provided verbally or electronically at the start of the exchange.

A “regulated occupation” is defined as an occupation regulated by the Utah Department of Commerce that requires a person to obtain a license or state certification to practice the occupation (such as doctors or other professionals).
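For illustration only, the sketch below shows one way a company might wire these two disclosure obligations into a consumer-facing chatbot: an up-front disclosure for regulated occupations and an on-request disclosure when a consumer asks whether they are talking to AI. The configuration flag, keyword heuristic, and message text are hypothetical simplifications, not requirements drawn from the statute.

```python
# Illustrative sketch only -- not legal advice and not a complete compliance
# mechanism. The configuration flag, keyword heuristic, and message text are
# all hypothetical simplifications.

REGULATED_OCCUPATION = True  # e.g., a state-licensed healthcare practice

AI_QUESTION_KEYWORDS = (
    "are you a bot",
    "are you ai",
    "are you human",
    "am i talking to a person",
    "is this a real person",
)

DISCLOSURE = (
    "You are interacting with generative artificial intelligence, not a human."
)

def session_opening_message() -> str | None:
    """Regulated occupations disclose prominently at the start of the
    exchange, whether or not the consumer asks."""
    return DISCLOSURE if REGULATED_OCCUPATION else None

def maybe_disclose(consumer_message: str) -> str | None:
    """Non-regulated businesses disclose when a consumer asks whether
    they are interacting with AI."""
    text = consumer_message.lower()
    if any(keyword in text for keyword in AI_QUESTION_KEYWORDS):
        return DISCLOSURE
    return None
```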

Fines and Penalties
For violations of the law, the Utah Division of Consumer Protection may impose administrative fines of up to $2,500 per violation and may seek injunctions or disgorgement of profits earned in violation of the UAIP. The Utah Attorney General may also seek up to $5,000 per violation of the UAIP. There is no private right of action under the UAIP.

COLORADO
Colorado Senate Bill 205 (SB205), which was signed into law in May 2024, will become effective on February 1, 2026. SB205 aims to address and mitigate the risks associated with “high-risk” AI systems and to enhance transparency, accountability, and consumer protection in the use of AI systems in Colorado. Key provisions include reasonable care requirements for developers and deployers of high-risk AI systems; impact assessments conducted annually and within 90 days of substantial modifications; consumer notifications when a high-risk AI system makes consequential decisions about them; and documentation and transparency requirements covering AI systems and risk management practices.

Scope
SB205 applies to developers and deployers of artificial intelligence systems, with special restrictions on “high-risk” systems. “High-risk” artificial intelligence systems are defined to include AI systems that make, or are a substantial factor in making, decisions with a material legal or similarly significant effect on the provision or denial of certain important services to a Colorado resident, such as education, healthcare services, financial services, housing, insurance, essential government services, or legal services.

The law includes limited exceptions to certain requirements for small deployers (fewer than 50 full-time equivalent employees) that do not use their own data to train the high-risk AI system, that limit uses of the high-risk AI system to those previously disclosed, and that share the developer’s impact assessment with consumers interacting with the system. Other general exemptions may also apply.

Disclosure Requirements
In addition to general obligations to inform individuals that they are interacting with an AI system, deployers of high-risk AI systems must notify Colorado residents that they are using a high-risk AI system to make (or be a substantial factor in making) a consequential decision about the individual before the decision is made. The notice must include, among other details, the purpose of the high-risk AI system and the nature of the consequential decision it makes, the deployer’s contact information, and a plain-language description of the high-risk AI system. Deployers must provide additional disclosures when a high-risk AI system is used to make an adverse consequential decision.
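As a purely illustrative sketch, the data structure below collects the notice elements described above so they can be reviewed before a consequential decision is communicated. The field names and example values are hypothetical; the statute and any implementing rules control the actual content of the notice.

```python
# Illustrative sketch only -- field names and example values are hypothetical;
# the statute and any implementing rules control the actual notice content.
from dataclasses import dataclass

@dataclass
class ConsequentialDecisionNotice:
    """Elements of the pre-decision notice to a Colorado resident."""
    system_purpose: str              # purpose of the high-risk AI system
    decision_nature: str             # nature of the consequential decision
    deployer_contact: str            # deployer's contact information
    plain_language_description: str  # plain-language description of the system

notice = ConsequentialDecisionNotice(
    system_purpose="Automated screening of rental applications",
    decision_nature="Approval or denial of a housing application",
    deployer_contact="privacy@example.com",
    plain_language_description=(
        "The system scores applications using income and rental history."
    ),
)
```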

Deployers and developers must also, subject to certain exceptions, publish additional statements on their websites, such as, in the case of deployers, an explanation of the high-risk AI systems being used, how risks of algorithmic discrimination are managed, and the information collected and used by the deployer.

Developers must provide a packet of disclosures to deployers of their systems, including the types of data used for training, measures taken to mitigate algorithmic discrimination, known limitations and risks of the system, foreseeable uses and harmful uses, and the purpose, benefits, and intended uses of the system.

Other Requirements
Both developers and deployers must use “reasonable care” to protect consumers from known or foreseeable risks of algorithmic discrimination. Deployers of high-risk AI systems must implement, and regularly review and update, risk management policies and programs. Deployers and third parties contracted by deployers must also complete impact assessments and monitoring for high-risk AI systems annually and within 90 days of any substantial modification.

Fines and Penalties
Violations of the law are considered deceptive trade practices under the Colorado Consumer Protection Act. Enforcement, which may include fines, injunctive relief, or other remedies, will be handled by the Colorado Attorney General. There is no private right of action under SB205.

THE BOTTOM LINE
Other states, including Connecticut and California, have proposed AI-specific legislation that is also gaining traction.

In an effort to prepare for state and potential federal legislation, companies should consider taking the following steps:

  • Review and inventory existing AI tools and determine whether any company data (including personal data) is used for training (a simple inventory sketch follows this list).
  • Determine the applicability of current and upcoming state AI legislation and take steps to comply with requirements (including necessary disclosures).
  • Conduct assessments of current AI tools and systems and monitor the accuracy and quality of AI outputs.
  • Establish acceptable AI usage policies and conduct employee training on proper use of AI.
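
The first step above, inventorying AI tools, is often easiest to operationalize as a structured record per tool. The sketch below is a hypothetical starting point, not a prescribed format; the fields shown (for example, whether personal data is used for training or whether the tool makes consequential decisions) simply mirror the questions raised by the Utah and Colorado laws discussed above.

```python
# Illustrative sketch only -- a hypothetical starting point for an internal
# AI inventory, not a prescribed compliance format.
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    """One row in an internal inventory of AI tools in use."""
    name: str
    vendor: str
    use_case: str
    uses_personal_data_for_training: bool  # relevant to training-data review
    consumer_facing: bool                  # may trigger disclosure duties (UAIP)
    consequential_decisions: bool          # may be "high-risk" under SB205
    jurisdictions: list[str] = field(default_factory=list)
    last_assessment: str | None = None     # date of most recent review

inventory = [
    AIToolRecord(
        name="Support chatbot",
        vendor="ExampleVendor",
        use_case="Customer service triage",
        uses_personal_data_for_training=False,
        consumer_facing=True,
        consequential_decisions=False,
        jurisdictions=["UT", "CO"],
    ),
]
```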

As efforts continue to address the unique opportunities and challenges posed by AI, companies should stay up to date on state and federal legislation and proactively implement AI policies and best practices.

Erin Locker is a commercial partner whose practice focuses on privacy, cybersecurity and data protection. She helps companies at every stage navigate the rapidly evolving landscape of global privacy regulation and develop strategic approaches to compliance. Erin counsels clients on a range of data privacy and protection issues involving product design and development, digital marketing and advertising, technology transactions, and cyber risk management and preparedness.
