With no federal plan in place to guide AI regulation or oversight, states have begun tackling the matter by drawing from the European Union’s (EU) regulatory framework, which establishes a risk-based approach for regulating AI systems and activities.
In May, Governor Jared Polis signed the Colorado AI Act, which places new requirements on AI systems. The law, which takes effect in 2026, applies to AI developers and deployers doing business in Colorado, including businesses and employers that require others to interact with AI systems. Its scope is limited to high-risk systems that make, or contribute to, consequential decisions, including decisions involving education, housing, health care, financial services, and legal services, among others.
The Colorado law prohibits algorithmic discrimination and requires developers and deployers to protect consumers from the risk of discrimination in their AI systems. Deployers must complete an annual impact assessment of their AI systems and maintain a risk management program for deploying high-risk systems. Many entities are exempt from the law, including employers with fewer than 50 employees.
California Looking at the European Union AI Act
California regulators and legislators have been meeting with EU officials who developed the EU AI Act as the state develops its own AI regulatory plan.
The EU AI Act prohibits the use of emotion recognition technology in educational institutions and workplaces, bans social credit scoring systems that incentivize or penalize specific behaviors, and restricts predictive policing in specific scenarios. It also designates as high-risk those AI systems used in health care, employment processes, and the administration of government benefits.
California legislators have now proposed dozens of bills regulating many aspects of AI, from requiring companies to tell users how their automated systems operate to preventing algorithmic discrimination.
California is also focusing on mandatory testing and assessment of high-risk AI systems, stronger transparency standards for AI-generated content, and an overall risk-based approach to regulation, including a proposal for a new agency dedicated to overseeing and regulating generative AI.
AI Employment Regulation: Washington, D.C. & New York City
In the District of Columbia, the City Council last year reintroduced legislation to prevent algorithmic discrimination in employment decisions. The Stop Discrimination by Algorithms Act of 2023 would prohibit algorithmic discrimination by employers and require service providers to ensure that their AI tools comply with the law.
Employers would be required to have a third party conduct an annual discrimination audit and to report the results. Employers would also have to post a notice informing employees about the law and display a pop-up notice on certain systems. City lawmakers are still considering the bill.
The District reintroduced its proposal after New York City took action to prevent algorithmic discrimination.
New York City’s law on AI in hiring tools took effect last year. It requires employers to inform job candidates in advance if they use automated hiring systems and to conduct an annual bias audit of those systems. The law defines an automated hiring tool as any tool that uses machine learning or AI with data analytics to evaluate candidates for employment.
Under the law, employers must notify job candidates, as well as city-based employees who are candidates for promotion, at least ten business days before screening them with automated hiring tools.
An external auditor must perform the annual bias audit of the hiring tool to evaluate whether it is biased against protected classes such as race, sex, and ethnicity. Employers must publish a summary of the audit on their website, along with information about the data they collect, its source, and how long they will retain it.
Employers found in violation of the law face penalties of $500 to $1,500 for each day the system was in use.
Utah Artificial Intelligence Policy Act
Utah’s governor this year signed into law the state’s new AI policy act, which establishes the Office of Artificial Intelligence Policy and an AI analysis program and creates liability standards for failing to disclose generative AI use.
The new law requires any entity providing services in a regulated occupation to disclose, when asked, its use of generative AI in text, audio, or visual interactions. It specifically requires disclosure when AI and non-scripted chatbots are used in health care communications.
While not focused on employment matters, the new law could be applied in workplace settings involving employee health and wellness programs, as well as other human resources interactions with employees.
AI in Employment Regulation: Your Rights
Absent a federal framework, states are looking to the international community and responding to constituent concerns as they regulate AI in the workplace and throughout consumer interactions.
As employers adopt AI across their systems, especially in human resources functions such as hiring, promotions, and terminations, the employer remains ultimately responsible and liable for discriminatory conduct, regardless of whether an employment decision is made by a human, an algorithm, or a service provider.
When it comes to enforcing employment rights, every situation is unique. Remote work, multi-jurisdictional employers, and employment agreements governed by the laws of a state other than the employee’s state of residence create complex matters that require expert guidance.
In any situation involving your employment rights, contact Potomac Legal Group to review your matter. Our attorneys are experts in complex employment matters, and we are closely following the regulatory changes affecting employment and the fast pace of technological change and AI adoption.