[US] Colorado will be first state to enact comprehensive AI legislation

30 May 2024

In the US, Colorado is set to become the first state to enact a comprehensive law addressing the use of artificial intelligence (AI) in employment and other critical areas, Forbes reports.

On May 8, the Colorado state legislature passed Senate Bill 24-205 (SB205). It now awaits the signature of Governor Jared Polis. If signed into law, the legislation will take full effect in 2026. 

SB205 is reportedly intended to prevent algorithmic discrimination and requires developers and deployers of high-risk AI systems to adopt rigorous compliance measures.

Definition and Scope of AI Under the Act

SB205 defines "high-risk artificial intelligence systems" as machine-based algorithms that significantly influence decisions in areas such as:

  • Employment and employment opportunities
  • Education enrollment and opportunities
  • Financial or lending services
  • Essential government services
  • Healthcare services
  • Housing
  • Insurance
  • Legal services

These AI systems are considered high-risk if they make, or substantially contribute to, consequential decisions affecting individuals or groups, potentially leading to differential treatment based on protected classifications such as age, disability, race, religion, or sex.

Those Affected

The Act applies to both developers and deployers of high-risk AI systems and defines them in the following way:

Developers: Any entity in Colorado that develops or significantly modifies an AI system.

Deployers: Any entity in Colorado that uses a high-risk AI system.

Small businesses with fewer than 50 full-time employees may reportedly be exempt from some of these requirements.

Compliance Obligations

If the bill is signed into law, affected businesses must comply with several stringent requirements from February 1, 2026.

Developers will be required to:

  • Provide extensive information to deployers, including known harmful uses and data summaries.
  • Publish a public statement on their website detailing the types of AI systems developed and their risk management strategies.
  • Disclose all known risks of algorithmic discrimination to the attorney general.

Deployers will be required to:

  • Implement and regularly review a comprehensive risk management policy.
  • Conduct impact assessments of AI systems annually and within 90 days of significant modifications.
  • Notify consumers when a high-risk AI system will be used to make consequential decisions, including detailed disclosures on their website.
  • Ensure consumers know they are interacting with an AI system unless it is obvious to a reasonable person.

General Requirements:

  • Both developers and deployers must use reasonable care to avoid algorithmic discrimination, with a rebuttable presumption of compliance for those that meet the Act's requirements.
  • Deployers must notify the attorney general of any discriminatory outcomes detected by their AI systems.
  • Developers must inform the attorney general and all known deployers of any new risks of discrimination discovered.

Consumer Rights and Notifications

According to Forbes, the Act mandates that businesses using high-risk AI systems provide detailed notices to individuals affected by these systems, including:

  • The purpose and nature of the AI system.
  • The type of decision being influenced by the AI.
  • The right to opt out of profiling in decisions with significant legal effects.
  • Contact information and details on accessing the public statement on AI use.

The Colorado attorney general will reportedly have exclusive authority to enforce SB205, treating violations as unfair and deceptive trade practices. The law provides no private right of action; however, businesses can assert an affirmative defence if they discover and cure violations through feedback or internal review processes.

Forbes advises organisations to develop robust AI risk management programs, conduct regular impact assessments and provide transparent disclosures to comply with this landmark legislation. It also warns employers outside Colorado to note that similar laws are being considered in other states, indicating a nationwide trend toward stricter AI regulations.


Source: Forbes

(Link via original reporting)
