July 8, 2025
As employers increasingly turn to artificial intelligence to streamline hiring, California’s Civil Rights Department (CRD) has introduced landmark regulations to ensure these technologies comply with state anti-discrimination laws.
California's Civil Rights Council, a branch of the CRD, proposed regulations in May 2024 to address discrimination risks posed by automated decision systems (ADS) used in employment [1]. On March 21, 2025, the Council approved the final version of the “Employment Regulations Regarding Automated-Decision Systems,” which are expected to take effect on October 1, 2025, following formal approval by the Office of Administrative Law [2].
The CRD defines automated decision systems broadly. These include tools such as algorithms, artificial intelligence, or machine learning used to make or assist in employment decisions—including resume screening, facial analysis, voice profiling, and predictive hiring models [2]. Employers are responsible for these systems whether they are developed internally or sourced from third-party vendors, who are also directly accountable under the rules [1][2].
Disparate Impact and Proxy Discrimination
The CRD prohibits employers from using ADS in ways that disproportionately affect protected groups, including those defined by race, gender, age, disability, language proficiency, height, weight, or other protected characteristics [2][3]. Criteria that appear neutral but correlate with protected traits—such as tone of voice or educational background—may also constitute unlawful discrimination [2].
Restricted Evaluations
Tools that evaluate an individual’s health, psychological traits, or criminal history are heavily regulated and may only be used when they meet strict standards for job relevance and nondiscrimination [2][3].
Recordkeeping
Employers and vendors must retain documentation related to ADS—including design notes, datasets, applicant outcomes, and bias testing reports—separately from personal files, for at least four years from the date the record is created or the personnel action occurred [2].
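The retention window is easy to miscalculate when the record-creation date and the personnel-action date differ. Below is a minimal Python sketch of the computation; the assumption that the later of the two dates starts the four-year clock is ours (a conservative reading of the rule), and the function name is illustrative, not drawn from the regulations.

```python
from datetime import date

RETENTION_YEARS = 4  # minimum retention period under the regulations

def retention_deadline(record_created: date, personnel_action: date | None = None) -> date:
    """Earliest date an ADS record could be discarded.

    Assumes the four-year clock runs from the later of record creation
    and the personnel action -- a conservative reading, not CRD guidance.
    """
    anchor = max(record_created, personnel_action or record_created)
    # Note: a Feb 29 anchor may need rounding forward in a non-leap target year.
    return anchor.replace(year=anchor.year + RETENTION_YEARS)

# A screening record created Jan 15, 2026, tied to a hiring decision on
# Mar 1, 2026, must be kept at least until Mar 1, 2030.
print(retention_deadline(date(2026, 1, 15), date(2026, 3, 1)))  # 2030-03-01
```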
Anti-Bias Testing
Anti-bias testing is assessed on its quality, efficacy, recency, and scope, as well as on its results and the employer's responses to those results. Employers are required to conduct and document statistical analyses to identify disparate outcomes. These evaluations are essential to defending against potential claims under California’s Fair Employment and Housing Act (FEHA) [2][3].
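The regulations do not prescribe a particular statistical method. One widely used heuristic is the EEOC's "four-fifths rule," under which a selection rate for any group below 80% of the highest group's rate flags potential adverse impact. The sketch below applies that rule; the group labels and counts are hypothetical.

```python
def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Each group's selection rate divided by the highest group's rate."""
    rates = {g: selected / total for g, (selected, total) in outcomes.items()}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical screening outcomes: (selected, total applicants) per group.
outcomes = {"group_a": (48, 100), "group_b": (30, 100)}
for group, ratio in impact_ratios(outcomes).items():
    status = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.3f} -> {status}")
# group_b's ratio is 0.625 -- well under the four-fifths (0.8) threshold.
```

A flagged ratio is a trigger to investigate and document, not by itself a legal conclusion; recording both the analysis and the response is what supports a FEHA defense.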
Human Oversight
All ADS-based decisions must involve meaningful human review. Employers remain fully responsible for employment decisions and cannot shift legal liability to an algorithm or system. The presence of AI does not reduce an employer’s obligations under civil rights laws [2].
Transparency & Appeals
Proposed legislation such as Senate Bill 7 would require employers to notify applicants when ADS is used [5]. Regardless of whether that bill passes, employers should clearly inform applicants when ADS will be used, how to request accommodations, and how to contest automated decisions.
Shared Liability
Both employers and third-party vendors can be held liable if the use of ADS results in discrimination. These regulations emphasize joint accountability for the impact of hiring technologies [2][3].
Enforcement Tools
The CRD may investigate potential violations and impose penalties under FEHA, including injunctive relief, back pay, reinstatement, and monetary damages. The agency is also expanding its capacity to audit ADS practices across industries [3].
A June 2025 study revealed that leading AI hiring systems—including those built on large language models—exhibited measurable bias, with significantly different recommendation rates based on race and gender, even when résumés were equivalent [4]. These findings underscore how subtle design choices or biased datasets can result in unintended discrimination, even in the absence of explicitly discriminatory criteria.
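For context on what "significantly different recommendation rates" means in practice, the sketch below runs a standard two-proportion z-test on invented counts. This is not the study's methodology; the numbers are purely illustrative.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(success_a: int, n_a: int, success_b: int, n_b: int):
    """Two-sided two-proportion z-test for a gap in recommendation rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical counts: 520 of 1000 resumes recommended for one group
# versus 440 of 1000 otherwise-equivalent resumes for another.
z, p = two_proportion_z(520, 1000, 440, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a gap this large is unlikely by chance
```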
Employers can prepare by taking the following steps:

- Conduct an inventory of all ADS tools in use and their application points (a minimal sketch of such an inventory follows this list).
- Run regular adverse impact analyses using sound statistical methods.
- Maintain comprehensive records of data inputs, outcomes, and decisions.
- Ensure human review is integrated throughout the hiring process.
- Hold vendors accountable by including compliance clauses in contracts and requiring transparency into their tools.
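As a starting point for the inventory step above, here is a minimal sketch of a structured ADS inventory entry. The fields, tool names, and vendor names are all hypothetical; the regulations do not prescribe a format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ADSRecord:
    """One ADS inventory entry; fields are illustrative, not CRD-prescribed."""
    tool_name: str
    vendor: str              # "internal" for tools built in-house
    application_point: str   # e.g., resume screening, video interviews
    last_bias_test: date | None = None
    contract_has_compliance_clause: bool = False

inventory = [
    ADSRecord("ResumeRanker", "Acme HR Tech", "resume screening",
              last_bias_test=date(2025, 6, 1),
              contract_has_compliance_clause=True),
    ADSRecord("FitScore", "internal", "predictive hiring model"),
]

# Surface tools with no documented bias test -- an audit gap under the rules.
for rec in inventory:
    if rec.last_bias_test is None:
        print(f"ACTION NEEDED: {rec.tool_name} has no bias test on record")
```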
California’s CRD regulations signal a major shift in how employers must manage AI-powered hiring practices. By emphasizing transparency, fairness, and accountability, the rules aim to balance technological innovation with civil rights protections. As artificial intelligence becomes more embedded in the workplace, ensuring compliance with these standards is critical for both legal protection and ethical hiring.
This article is not intended to be exhaustive nor should any discussion or opinions be construed as legal advice. Readers should contact legal counsel for legal advice.
Works Cited