
Artificial Intelligence: Regulatory Overview & Practical Considerations

As artificial intelligence (AI) continues to revolutionize industries, governments worldwide are racing to establish regulatory frameworks that ensure its responsible use. These new laws aim to address risks such as algorithmic discrimination, data privacy concerns, and a lack of transparency in decision-making. In a recent webinar, KO Law’s Chase Millea, Chris Achatz, and Erin Locker shared key insights on navigating this complex regulatory landscape, along with practical strategies for organizations looking to deploy or integrate AI systems effectively. Check out the recap and the full webinar recording below.

The AI Regulatory Landscape

In the United States, Colorado, California, and Utah are leading the charge with AI-specific legislation. Colorado’s AI Act was the first of its kind in the U.S., focusing on preventing algorithmic discrimination and regulating “high-risk” systems, such as those used in decisions related to education, employment, financial services, essential government services, healthcare, housing, insurance, or legal services. California has taken a broader approach with a series of laws addressing generative AI transparency, training data disclosures, and industry-specific regulations. For example, healthcare providers using AI to communicate clinical information must disclose its use to patients and provide an option for human interaction. Utah, meanwhile, emphasizes transparency by requiring businesses and licensed professionals to disclose AI use during interactions with clients or consumers.

The European Union is setting the global benchmark with its comprehensive EU AI Act, which adopts a risk-based framework. AI systems are categorized into four tiers of risk. Minimal risk systems, such as spam filters, require no oversight, while limited risk systems like chatbots must disclose AI involvement to users. High-risk systems, including AI used in education, employment, healthcare, finance, critical infrastructure, law enforcement, or immigration management, are subject to stringent documentation, risk assessments, and compliance measures. At the highest level, unacceptable risk systems—such as those used for social scoring, biometric categorization to infer sensitive data, or subliminal, manipulative, or deceptive techniques that impair decision making—are outright banned.

Key Requirements

AI regulations impose varying requirements depending on risk classification and the organization’s role as a developer, deployer, or user of AI systems:

  1. Transparency Obligations: Transparency is central to most AI regulations. Businesses must clearly disclose when AI is involved, whether during customer interactions (e.g., chatbots) or within AI-generated outputs (e.g., images, content, or decisions). Generative AI developers are also required to document training data, detailing the datasets used to build their systems.
  2. Risk Assessments: For high-risk AI systems, organizations must conduct comprehensive risk assessments to evaluate potential harms, including algorithmic bias, discrimination, and adverse impacts on individuals. Some laws require ongoing assessments throughout the lifecycle of the system. The NIST Artificial Intelligence Risk Management Framework and the ISO/IEC 42001:2023 Standard are valuable tools for conducting these evaluations.
  3. Audits and Testing: Regular audits, including post-deployment evaluations, are necessary to ensure AI systems perform as intended and comply with legal requirements. For instance, New York City’s Local Law 144 mandates bias audits for automated employment decision tools used in hiring decisions.
  4. Governance Programs: AI governance programs are becoming essential. These include assigning accountability to specific teams or individuals, maintaining detailed documentation on AI use, and establishing internal policies to manage risks. Some regulations, such as Colorado’s AI Act, recommend aligning with frameworks like NIST or ISO to ensure compliance.
  5. Data Privacy and Security: When personal data is used in AI systems, compliance with privacy laws (e.g., GDPR, CCPA) is critical. This includes securing input data, protecting user rights (such as access or deletion), and limiting data retention.
  6. Vendor Collaboration: Developers must assist deployers by providing necessary documentation and technical information to meet compliance requirements. This transparency ensures downstream users can also fulfill legal obligations.

Practical Considerations for Organizations

For organizations looking to adopt AI tools, the following steps are essential for ensuring both compliance and effective integration:

  1. Evaluate Technical Compatibility: Before adopting an AI system, assess its compatibility with your existing technology stack. Determine whether the tool can scale with your needs and whether it can integrate seamlessly using APIs or other methods.
  2. Clarify Intellectual Property (IP) Rights: Ownership of AI-generated outputs (e.g., content, code) can be a gray area. Ensure contracts clearly outline whether your organization retains ownership or merely licenses the outputs. This is especially important if the AI outputs will be incorporated into products or sold to customers.
  3. Ensure Data Privacy Compliance: Identify whether personal data is being used in the AI system and confirm compliance with privacy laws. Contracts with vendors should address who is responsible for data security and for responding to user requests (e.g., deletion or correction of data).
  4. Assess Vendor Reliability: Choose reputable AI vendors with a strong track record of reliability and compliance. Verify whether the vendor can meet service-level agreements (SLAs) and sustain long-term support for the tool.
  5. Define Liability and Indemnification: Include clear terms in contracts about liability in case of data breaches, intellectual property infringement, or system malfunctions. Ensure vendors carry adequate insurance, such as technology errors and omissions or cyber liability coverage.
  6. Establish Governance and Ethical Guidelines: Proactively address algorithmic bias, transparency, and fairness. Create internal policies to evaluate the ethical implications of AI use, particularly in high-risk applications like education, hiring, lending, or healthcare.
  7. Monitor and Audit: Regularly test and audit AI systems to ensure they perform as expected and meet evolving legal requirements. For high-risk systems, document all assessments and audits to demonstrate compliance during regulatory reviews.

Why It Matters

As AI systems become more integrated into everyday operations, regulation is rapidly evolving to address ethical, societal, and legal concerns. This growing body of law highlights the need for businesses to prioritize compliance, transparency, and risk management. Organizations that proactively implement governance frameworks and ethical practices will be better positioned to maintain trust, avoid legal pitfalls, and thrive in this new landscape.

Tune in to the full webinar recording below for much more detail:

Note: The Colorado Office of CLJE has accredited the webinar recording as a home study continuing legal education program. Colorado attorneys and judges who attend this entire seminar may claim 1 general CLE Credit by self-reporting Course ID 854453 into their Online CLE Transcript.

