New regulations from California’s Civil Rights Council, set to take effect on October 1, 2025, formally extend anti-discrimination protections under the Fair Employment and Housing Act (FEHA) to cover decisions made or facilitated by automated-decision systems (ADS) in employment contexts [1], [2]. This move creates a significant new area of legal and operational risk for organizations, particularly those leveraging AI for hiring, promotion, or performance management. For security and IT professionals, this is not merely an HR policy update; it mandates new data governance, auditing, and recordkeeping requirements that intersect directly with system administration, data security, and third-party risk management.
The core of the regulation prohibits employers from using an ADS or selection criteria that discriminate against applicants or employees on the basis of protected characteristics such as race, gender, age, or disability [3], [9]. Critically, liability can be established through disparate impact without proof of intent, and it extends to decisions made by third-party vendors, who may themselves be considered an “employer” under these rules [2], [4], [5]. This transforms the use of off-the-shelf AI hiring tools from a simple procurement decision into a potential source of corporate liability, requiring technical teams to engage in vendor assessment and continuous monitoring.
Defining the Scope: What Constitutes an Automated-Decision System?
The definition of an ADS is intentionally broad, covering any computational process that makes a decision, or facilitates human decision-making, regarding an employment benefit [1], [2]. This includes tools derived from machine learning, algorithms, statistics, or other data-processing techniques. For technical teams, the first compliance step is a comprehensive audit and inventory of all such systems. Examples specified in the guidance are highly relevant to modern hiring practices: resume screeners that search for specific terms or patterns; tools that direct job advertisements to targeted demographic groups; computer-based tests or games assessing skills or cultural fit; and software that analyzes video or audio interviews for factors like facial expression or tone of voice [1], [8]. Basic software like word processors or spreadsheets is excluded, but only if it is not used to inform an employment decision [2]. The line between a tool and a decision-making system is therefore drawn by its use case, not its underlying technology.
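As a starting point for that inventory, a lightweight record per tool keeps the audit queryable and flags systems that have never been assessed. The following Python sketch is illustrative only; the fields, names, and one-year review cadence are assumptions, not requirements taken from the regulations.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative inventory record for an ADS audit. The fields below are
# assumptions about what a compliance team would want to track, not a
# schema prescribed by the CRD regulations.
@dataclass
class ADSInventoryEntry:
    tool_name: str                  # e.g., a vendor resume screener (hypothetical)
    vendor: str                     # third-party supplier, if any
    employment_use: str             # hiring, promotion, performance, etc.
    decision_role: str              # "makes decision" vs. "facilitates human decision"
    data_inputs: list[str] = field(default_factory=list)  # resumes, video, game telemetry
    last_bias_assessment: date | None = None              # None flags an unaudited tool

def needs_assessment(entry: ADSInventoryEntry, max_age_days: int = 365) -> bool:
    """Flag tools with no bias assessment, or one older than the review cadence."""
    if entry.last_bias_assessment is None:
        return True
    return (date.today() - entry.last_bias_assessment).days > max_age_days

# Hypothetical entry: a vendor screening tool that has never been assessed.
screener = ADSInventoryEntry(
    tool_name="Acme Resume Screener",
    vendor="Acme HR Tech",
    employment_use="hiring",
    decision_role="facilitates human decision",
    data_inputs=["resume text", "application form"],
)
print(needs_assessment(screener))  # True: no bias assessment on record
```

A spreadsheet or GRC platform serves the same purpose; the point is a single authoritative inventory shared by HR, procurement, and IT.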
Technical and Administrative Compliance Requirements
While the regulations do not mandate a specific bias-audit framework, they strongly emphasize the value of proactive anti-bias testing, and evidence of such efforts, or the lack thereof, will be a major factor in any legal claim [1], [5]. From a technical standpoint, this requires establishing processes for regular bias assessments, which analyze the ADS’s input data, model outputs, and decision outcomes for disparate impact across protected groups. The quality, scope, recency, and results of these tests, along with the employer’s response to any findings, will be scrutinized [2], [8]. Furthermore, the regulations extend the mandatory retention period for all personnel and employment records from two years to four years. This explicitly includes “automated-decision system data,” meaning any data used in or generated by the ADS [2], [5]. This has direct implications for data storage policy, backup systems, and data lifecycle management within IT departments.
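The regulations do not prescribe a statistical test, but the four-fifths (80%) rule from the EEOC’s Uniform Guidelines is a common first-pass screen for disparate impact in selection rates. Below is a minimal sketch, assuming per-group selection counts can be exported from the ADS; a ratio under 0.8 flags a tool for deeper statistical and legal review, not a finding of discrimination.

```python
# First-pass disparate impact screen using the four-fifths (80%) rule from
# the EEOC Uniform Guidelines. This is a screening heuristic, not a legal
# determination; the counts below are illustrative.
def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants if applicants else 0.0

def four_fifths_check(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total applicants).
    Returns each group's impact ratio versus the highest-rate group;
    ratios below 0.8 warrant further review."""
    rates = {g: selection_rate(s, n) for g, (s, n) in outcomes.items()}
    top = max(rates.values())
    return {g: (r / top if top else 0.0) for g, r in rates.items()}

# Hypothetical export from an ADS: group -> (selected, applied)
ratios = four_fifths_check({"group_a": (48, 120), "group_b": (24, 100)})
for group, ratio in ratios.items():
    status = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} [{status}]")
```

Running this kind of screen on a recurring schedule, and retaining the results, directly supports the “quality, scope, recency” factors the regulations weigh.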
Other key requirements include providing reasonable accommodations for disabilities and religious practices in the use of ADS, which necessitates building accessible interfaces and clear request mechanisms [2]. The regulations also implicitly require meaningful human oversight of AI-facilitated decisions, a control that must be designed into the workflow [1], [4]. Specific practices are called out as high-risk: tools that rank candidates based on schedule availability may discriminate on religious or disability grounds; systems measuring dexterity or reaction time may disadvantage individuals with disabilities; and analysis of physical characteristics or tone may lead to discrimination based on race, gender, or national origin [2], [4].
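Because the regulations call for meaningful oversight without mandating a mechanism, one defensible pattern is to refuse to finalize any ADS recommendation until an authenticated human review, with a rationale, has been logged. The sketch below illustrates that pattern; the structure and field names are hypothetical, not drawn from the regulations.

```python
# Sketch of an enforced human-review gate for ADS-facilitated decisions.
# The workflow and field names are illustrative assumptions; the regulations
# call for meaningful oversight but do not mandate a specific mechanism.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ReviewRecord:
    reviewer: str           # authenticated human reviewer
    rationale: str          # free-text justification, retained with ADS data
    reviewed_at: datetime

def finalize_decision(ads_recommendation: str, review: ReviewRecord | None) -> str:
    """Refuse to finalize any ADS recommendation without a logged human review."""
    if review is None or not review.rationale.strip():
        raise PermissionError("ADS decision blocked: human review and rationale required")
    # In practice, append (recommendation, review) to an immutable audit log here.
    return ads_recommendation

review = ReviewRecord("j.doe", "Verified scores against full application packet",
                      datetime.now(timezone.utc))
print(finalize_decision("advance_to_interview", review))
```

Logging the rationale alongside the ADS output also feeds directly into the four-year retention requirement discussed above.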
Broader Context and Litigation Landscape
California’s action occurs within a federal regulatory void, following the rescission of prior AI executive orders and EEOC guidance [7]. This has positioned states as the primary regulators, with more than 25 states introducing related workplace legislation in 2025 [7]. Pending California legislation, such as SB 7 (the “No Robo Bosses Act”), could add further requirements for notice, human-oversight structures, and worker appeal rights [7], [8]. The real-world risk is already materializing in active litigation: Mobley v. Workday, a collective action alleging age discrimination through Workday’s AI-powered hiring software, highlights the direct litigation risk for employers who rely on third-party AI tools and underscores the need for rigorous vendor management and contractual safeguards [4].
Actionable Compliance Checklist for Technical and Security Teams
The convergence of legal compliance and technical implementation demands a cross-functional approach. The following steps provide a roadmap for security, IT, and infrastructure teams to support organizational compliance.
- Audit & Inventory: Collaborate with HR and procurement to identify every ADS tool used in employment decisions, including those embedded in vendor platforms [1], [4].
- Vendor Management & Contract Review: Question vendors on their bias testing methodologies, data transparency, and security practices. Review and update contracts to include indemnification clauses and warranties of compliance with anti-discrimination laws [2], [4].
- Data Governance & Recordkeeping: Update data retention policies to ensure all employment records, including ADS input/output data, are retained for four years. This may require changes to database archiving, backup schedules, and storage solutions (see the retention sketch after this list) [1], [5].
- Implement Oversight & Accommodation Workflows: Technically enforce and log human review steps for significant decisions. Ensure clear, secure, and accessible pathways for individuals to request accommodations related to ADS use [2].
- Establish Governance: Form a dedicated team involving HR, Legal, IT, and Security to set policies for AI use, approve new tools, and monitor ongoing compliance [4], [7].
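For the recordkeeping item above, a retention gate in archiving and deletion jobs can enforce the four-year window mechanically. This is a sketch under stated assumptions: the trigger date for the retention clock (record creation versus the personnel action) should be confirmed with counsel, and litigation holds must always take precedence.

```python
# Sketch of a four-year retention gate for ADS and personnel records, per the
# extended FEHA retention period. The record fields and the trigger-date rule
# are assumptions for illustration; confirm the applicable trigger with counsel.
from datetime import date, timedelta

RETENTION = timedelta(days=4 * 365)  # four years, ignoring leap-day edge cases

def eligible_for_deletion(created: date, action_date: date, as_of: date,
                          litigation_hold: bool = False) -> bool:
    """A record may leave the archive only after the retention window closes
    and no litigation hold applies."""
    clock_start = max(created, action_date)  # assumed trigger: later of the two
    return not litigation_hold and as_of >= clock_start + RETENTION

print(eligible_for_deletion(date(2020, 3, 1), date(2020, 6, 15), date(2025, 10, 1)))  # True
print(eligible_for_deletion(date(2024, 9, 1), date(2024, 9, 1), date(2025, 10, 1)))   # False
```

Wiring a check like this into scheduled purge jobs prevents premature deletion of ADS data that the regulations now require employers to keep.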
For security professionals, these regulations create a new class of data, “automated-decision system data,” that must be protected with the same rigor as other sensitive personnel information. The extended retention period also expands the attack surface and the data-breach liability window, making robust encryption, access controls, and monitoring essential. The requirement for bias auditing aligns with broader data integrity and model security principles, as poisoned training data or adversarial attacks on AI models could lead to discriminatory outcomes and subsequent legal action.
In conclusion, California’s new AI employment regulations represent a significant shift, treating algorithmic discrimination as a tangible legal and operational risk. Compliance is not a one-time checkbox but an ongoing program of technical auditing, vendor management, and data governance. For technical teams, this means moving beyond viewing AI tools as black-box solutions and instead managing them as regulated systems with specific security, integrity, and compliance requirements. As other states follow suit and litigation progresses, the practices established now will define an organization’s risk posture in the increasingly regulated landscape of workplace AI.
References
- [1] Jackson Lewis, “California’s New AI Regulations Take Effect Oct. 1: Here’s Your Compliance Checklist,” Aug. 27, 2025.
- [2] Wilson Turner Kosmo, “Special Alert: CRD Approves New AI Regulations – What Employers Need to Know,” Jul. 10, 2025.
- [3] Davron, “California Expands Anti-Discrimination Regulations to Cover AI in…,” Jul. 17, 2025.
- [4] Holland & Hart, “New AI Hiring Rules and Lawsuits Put Employers on Notice: What HR Needs to Know,” May 22, 2025.
- [5] CDF Labor Law LLP, “New Proposed Regulations Will Impact How Businesses Utilize AI to Make Personnel Decisions,” Feb. 13, 2025.
- [6] The Washington Post, “What to do if you fear AI is discriminating against you at work,” Dec. 1, 2025.
- [7] Sheppard Mullin (Labor & Employment Law Blog), “Where Are We Now With the Use of AI in the Workplace?” Jun. 16, 2025.
- [8] SW&M, “California’s New AI Employment Regulations and What Employers Need to Know,” Sep. 24, 2025.
- [9] SHRM, “New California AI Rules: Employers Liable for Discrimination,” Oct. 13, 2025.