Ch 7 — Compliance & Risk

Navigating the regulatory landscape for AI in employment — federal, state, local, and international requirements
Regulatory content current as of March 2026 — verify before acting on legal guidance
High-level roadmap: Federal → State → Local → Global → Audit → Govern
The Regulatory Landscape
AI in employment is now regulated at every level — and the landscape is moving fast
The New Reality
AI in employment is now regulated at every level of government: federal (Title VII, ADA, ADEA), state (California, Colorado, Texas, Illinois), local (NYC), and international (EU AI Act). The Biden-era AI executive order was rescinded and the EEOC removed its 2023 AI guidance from its website, but this does not reduce employer liability. Title VII, the ADA, and the ADEA still apply in full force. If AI produces disparate impact, the employer is liable regardless of whether federal guidance documents exist. For HR ops leaders, compliance with AI regulations is a present-day operational requirement.
Why It’s Accelerating
High-profile cases: Amazon’s biased hiring AI, HireVue’s facial analysis controversy, the iTutorGroup EEOC settlement, and the Eightfold AI class action (Jan 2026) — alleging secret applicant scoring dossiers that may violate FCRA — put AI bias squarely on lawmakers’ radar.

State momentum: California CRD regulations (Oct 2025), Colorado AI Act (delayed to June 2026), and Texas HB 149 (Jan 2026) represent the most detailed state-level AI employment regulations yet. Federal legislation is expected by late 2026 or early 2027 to harmonize the state patchwork.

Public awareness: Employees and candidates increasingly know that AI is making decisions about them — and they expect transparency.
Regulatory Layers
FEDERAL (applies everywhere)
• Title VII, ADA, and ADEA apply to AI decisions
• EEOC 2023 guidance pulled, but the underlying statutes are still in force
• FCRA applies to AI scoring tools

STATE (varies by jurisdiction)
• California: CRD Regulations (Oct 2025), the most detailed yet: bias testing, human oversight, 4-year recordkeeping
• Colorado: AI Act / SB 24-205 (June 2026)
• Texas: TRAIGA / HB 149 (Jan 2026)
• Illinois: AI Video Interview Act
• Maryland: Facial recognition ban

LOCAL (city-level)
• NYC Local Law 144: bias audits required
• Other cities watching and drafting

INTERNATIONAL (if you operate globally)
• EU AI Act: high-risk classification
• Canada: AIDA (proposed)
• UK: pro-innovation AI framework

COMING (expected late 2026 / early 2027)
• Federal AI legislation to harmonize the state patchwork
Ops reality: The removal of federal guidance does not create a compliance vacuum — it makes the landscape harder to navigate, not easier. You need to know which regulations apply to your organization, which AI tools are in scope, and who owns compliance. This chapter gives you the map.
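To make the map concrete, the layers above can be expressed as a simple jurisdiction-to-rules lookup that feeds the tool inventory later in this chapter. Below is a minimal Python sketch; the rule names are shorthand summaries from this chapter, not legal text, and the jurisdiction keys are illustrative assumptions.

```python
# Minimal sketch: map each place you hire to the AI rules that apply there.
# Entries are shorthand summaries, not legal text.
APPLICABLE_RULES = {
    "federal": ["Title VII", "ADA", "ADEA", "FCRA (AI scoring tools)"],
    "CA": ["CRD Automated Decision Systems Regulations (Oct 2025)"],
    "CO": ["Colorado AI Act / SB 24-205 (June 2026)"],
    "TX": ["TRAIGA / HB 149 (Jan 2026)"],
    "IL": ["AI Video Interview Act"],
    "MD": ["HB 1202 (facial recognition waiver requirement)"],
    "NYC": ["Local Law 144 (AEDT bias audits)"],
    "EU": ["EU AI Act (high-risk employment AI)"],
}

def rules_for(jurisdictions: list[str]) -> list[str]:
    """Combined rule set for everywhere you hire; federal always applies."""
    rules = list(APPLICABLE_RULES["federal"])
    for j in jurisdictions:
        rules.extend(APPLICABLE_RULES.get(j, []))
    return rules

# Example: an employer hiring in California, NYC, and the EU
print(rules_for(["CA", "NYC", "EU"]))
```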
EEOC & Federal Requirements
Title VII applies to AI decisions — and the employer is liable, not the vendor
The Legal Foundation
Title VII of the Civil Rights Act prohibits employment discrimination based on race, color, religion, sex, or national origin. The EEOC’s 2023 guidance on AI in hiring was removed from its website following the rescission of the Biden-era AI executive order. However, the underlying law has not changed. Title VII applies to AI-driven decisions just as it does to human decisions. If your AI screening tool disproportionately rejects candidates from a protected class, that’s a potential Title VII violation — regardless of whether federal guidance documents exist.

Emerging litigation: The Eightfold AI class action (January 2026) alleges the platform created secret applicant scoring dossiers without disclosure, potentially violating the Fair Credit Reporting Act and California consumer reporting laws. This case signals a new front for AI hiring liability beyond Title VII.
Disparate Impact vs. Disparate Treatment
Disparate treatment: Intentionally treating people differently based on protected class. Rare in AI, but possible if protected characteristics are used as input features.

Disparate impact: A facially neutral practice that disproportionately affects a protected group. This is the primary AI risk. An algorithm doesn’t need to “intend” to discriminate — if the outcome is discriminatory, you have a legal problem.
The Four-Fifths Rule
The Four-Fifths (80%) Rule: the selection rate for any group must be at least 80% of the rate of the group with the highest selection rate.

Example:
• Group A selection rate: 60%
• Group B selection rate: 40%
• Ratio: 40 / 60 = 66.7%
• Below the 80% threshold = adverse impact

This applies to every stage where AI is making or influencing selection: resume screening, interview scheduling, assessments, final recommendations.
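The arithmetic is simple enough to automate for every AI-influenced stage. Here is a minimal Python sketch of the check, using the hypothetical figures from the example above:

```python
def adverse_impact_check(selection_rates: dict[str, float],
                         threshold: float = 0.8) -> dict[str, bool]:
    """Flag groups whose rate falls below four-fifths of the highest rate.

    selection_rates maps group name -> selection rate (selected / applicants).
    Returns group -> True if that group shows potential adverse impact.
    """
    highest = max(selection_rates.values())
    return {group: (rate / highest) < threshold
            for group, rate in selection_rates.items()}

# Hypothetical figures from the example above
rates = {"Group A": 0.60, "Group B": 0.40}
print(adverse_impact_check(rates))  # {'Group A': False, 'Group B': True}
```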
Critical: You are liable as the employer even if the vendor built the AI. The removal of EEOC guidance does not change this: the use of algorithmic decision-making tools does not change an employer’s obligations under Title VII. The statute, not the guidance, creates the liability. You cannot outsource compliance to a vendor.
State Laws: The Patchwork
A growing web of state-level AI regulations with different requirements
Key State Laws
California CRD Regulations (effective October 2025): The most detailed state AI employment regulations yet. Unlawful to use automated decision systems that discriminate on protected traits. Requires meaningful human oversight, proactive bias testing, 4-year recordkeeping, and alternative assessments for candidates. This is now the strictest state standard.

Colorado AI Act / SB 24-205 (delayed to June 2026): Covers “high-risk” AI in employment. Requires annual impact assessments, transparency, documentation, and appeal processes. Violations constitute an unfair trade practice.

Texas TRAIGA / HB 149 (January 2026): Focuses on government agencies but establishes that AI cannot be used with intent to discriminate. Notably rejects disparate impact as standalone liability — diverging from California and other states.

Illinois AI Video Interview Act (2020): Employers must notify candidates that AI is analyzing their video interview, explain how the AI works, and obtain consent.

Maryland HB 1202 (2020): Bans the use of facial recognition in job interviews unless the applicant signs a waiver.
The Compliance Patchwork Problem
Risky Approach
Trying to comply with each state’s law individually. Different disclosure language for Illinois candidates, different consent forms for Maryland, different data rights for California. Creates operational complexity, errors, and audit nightmares.
Smart Approach
Comply with the strictest standard across the board. If Illinois requires AI disclosure for video interviews, disclose for all candidates everywhere. If California grants data rights, extend them to all employees. One standard, universally applied.
Strategy: Build your AI governance to the strictest standard in play, currently the California CRD regulations. One standard is simpler to administer, reduces legal risk, and future-proofs you as more states pass laws. Federal AI legislation is expected by late 2026 or early 2027 to harmonize the patchwork. The trend is toward more regulation, not less.
NYC Local Law 144: The Template
The first major local law on AI in hiring — and why other cities are watching
What It Requires
NYC Local Law 144, effective July 2023, regulates automated employment decision tools (AEDTs) — any AI system that “substantially assists or replaces” hiring or promotion decisions. The requirements are specific:

1. Annual bias audit: An independent third party must conduct a bias audit annually, analyzing selection rates by race/ethnicity and sex (a sketch of the audit arithmetic follows this list).

2. Published results: The audit summary must be publicly posted on the employer’s website.

3. Candidate notification: Candidates must be notified at least 10 business days before the AEDT is used, told what job qualifications the tool assesses, and given the option to request an alternative process.
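Requirement 1 is the operationally heavy one. The sketch below shows the core audit arithmetic in Python with pandas: selection rate per demographic category, and each category's impact ratio against the highest-rate group. The data and column names are hypothetical, and an actual Local Law 144 audit must be performed by an independent auditor.

```python
import pandas as pd

# Hypothetical applicant-level outcomes; column names are assumptions.
df = pd.DataFrame({
    "category": ["Hispanic/Latino", "White", "Black", "White",
                 "Black", "Hispanic/Latino", "White", "Black"],
    "selected": [1, 1, 0, 1, 1, 0, 1, 0],
})

# The core metrics an LL144 bias audit reports: selection rate per
# category, and impact ratio relative to the highest-rate category.
rates = df.groupby("category")["selected"].mean()
impact_ratios = rates / rates.max()
print(pd.DataFrame({"selection_rate": rates, "impact_ratio": impact_ratios}))
```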
Enforcement & Impact
Enforcement status:
• Penalties: $500 for a first violation; $500–$1,500 for each subsequent violation
• Penalties seem small, but the real risk is the reputational damage of publishing a bias audit that shows adverse impact

What counts as an AEDT:
• YES: AI resume screeners, chatbot evaluators, ML-scored assessments, AI interview analysis
• NO: spam filters on applications, scheduling bots, basic keyword search in ATS

Why it matters nationally: NYC is the template. Other major cities (LA, Chicago, DC) are drafting similar laws. If you operate nationally, prepare now.
Ops action: If you use any AI in hiring — even through a vendor — inventory those tools now. Determine if they qualify as AEDTs. If you hire in NYC (or plan to), you need a bias audit process in place. Even if you don’t, it’s coming to your city.
EU AI Act: High-Risk Classification
Employment AI is classified as “high-risk” — the strictest regulatory tier
What the EU AI Act Means for HR
The EU AI Act, which began phased enforcement in 2024, classifies AI systems by risk level. Employment AI is explicitly classified as “high-risk,” meaning it faces the strictest requirements. This applies to any AI used in: recruitment, resume screening, job advertising, interview evaluation, promotion decisions, termination decisions, task allocation, and performance monitoring.

If your organization employs people in the EU, or uses AI tools that process EU residents’ data, this applies to you.
High-Risk Requirements
• Conformity assessment: formal evaluation that the AI system meets all EU AI Act requirements before deployment
• Data governance: training data must be relevant, representative, free of errors, and complete
• Human oversight: humans must be able to understand, monitor, and override AI decisions
• Transparency: users must be informed when AI is being used and understand how it works
• Record-keeping: automatic logging of all AI system operations for traceability and audit
• Penalties: up to €35 million or 7% of global revenue (GDPR-level penalties)
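Record-keeping is the requirement you can operationalize earliest. Below is a minimal sketch of append-only decision logging in Python; the log_ai_decision helper, its fields, and the JSON-lines format are assumptions for illustration, not an official EU AI Act schema.

```python
import json
from datetime import datetime, timezone

def log_ai_decision(log_path: str, tool: str, subject_id: str,
                    inputs: dict, output: str, human_reviewer: str) -> None:
    """Append one AI decision as a JSON line for audit traceability.

    Hypothetical schema: the EU AI Act requires automatic logging but
    does not prescribe these exact fields.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "subject_id": subject_id,          # candidate/employee identifier
        "inputs": inputs,                  # features the model actually saw
        "output": output,                  # the model's decision or score
        "human_reviewer": human_reviewer,  # who can override (oversight)
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_ai_decision("ai_decisions.jsonl", "resume-screener-v2", "cand-1042",
                {"years_experience": 7}, "advance", "j.doe@example.com")
```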
Global impact: Like GDPR before it, the EU AI Act is setting the global standard. Even if you only operate in the US, your AI vendors are likely building to EU requirements — and US regulations are borrowing heavily from this framework.
Vendor Liability: Who’s Responsible?
If your AI vendor’s tool discriminates, YOU are liable as the employer
The Liability Reality
Here is the most important legal principle in AI compliance: if your vendor’s AI tool discriminates, you — the employer — are liable. Not the vendor. Not the developer. You. The EEOC has made this clear: employers cannot outsource their Title VII obligations. A vendor’s AI is your tool once you deploy it. If it creates adverse impact, it’s your problem in court.
Vendor Due Diligence
Before signing: Request bias audit results, model documentation, and data governance practices. If they can’t provide these, walk away.

During contract: Negotiate audit rights, indemnification clauses, and transparency requirements.

After deployment: Conduct your own bias monitoring. Don’t rely solely on vendor assurances.
What Your Contracts Need to Say
Must-Have Contract Provisions:
1. Bias audit rights: right to conduct independent bias audits at any time, at the vendor's expense
2. Model documentation: vendor must provide a model card, training data description, and validation results
3. Indemnification: vendor indemnifies the employer for claims arising from AI bias or discrimination
4. Audit access: employer can inspect the model, data, and decision logic upon request
5. Data deletion: all candidate/employee data deleted upon contract termination
6. Change notification: vendor must notify before any model updates that could affect outcomes
Red flag: If a vendor resists any of these provisions, that tells you something about their confidence in their own product. Serious AI vendors welcome transparency requirements because they’ve already done the work.
Building a Compliance Framework
A practical framework for AI governance in HR operations
The Five-Step Framework
1. Inventory: Catalog every AI tool in use across HR — including AI features embedded in your ATS, HRIS, and other platforms that you may not think of as “AI tools.”

2. Classify: Assign a risk level to each tool based on what decisions it influences: high (hiring, firing, compensation), medium (scheduling, routing, categorization), low (drafting, summarization, search). A minimal sketch of steps 1 and 2 follows this list.

3. Assign ownership: Every AI tool needs a named owner responsible for compliance, monitoring, and incident response.

4. Schedule audits: High-risk tools need bias audits at least annually. Medium-risk tools need periodic review. Low-risk tools need annual inventory confirmation.

5. Document everything: Every decision about AI deployment, every audit result, every incident — documented and retained.
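Steps 1 and 2 translate directly into a small data structure. Below is a minimal Python sketch; the tool names, vendors, and the decision-to-risk mapping are illustrative assumptions, not a prescribed taxonomy.

```python
from dataclasses import dataclass

HIGH_RISK_DECISIONS = {"hiring", "firing", "compensation"}
MEDIUM_RISK_DECISIONS = {"scheduling", "routing", "categorization"}

@dataclass
class AITool:
    name: str
    vendor: str
    decisions_influenced: set[str]  # e.g. {"hiring"} or {"drafting"}

    @property
    def risk_level(self) -> str:
        """Classify by the highest-stakes decision the tool influences."""
        if self.decisions_influenced & HIGH_RISK_DECISIONS:
            return "high"
        if self.decisions_influenced & MEDIUM_RISK_DECISIONS:
            return "medium"
        return "low"

# Hypothetical inventory entries
inventory = [
    AITool("resume-screener-v2", "Acme AI", {"hiring"}),
    AITool("shift-scheduler", "PlanCo", {"scheduling"}),
    AITool("jd-draft-assistant", "WriteBot", {"drafting"}),
]
for tool in inventory:
    print(f"{tool.name}: {tool.risk_level}")
```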
Compliance Checklist Template
AI COMPLIANCE CHECKLIST
[ ] Complete AI tool inventory
[ ] Classify each tool by risk level
[ ] Assign compliance owner per tool
[ ] Review vendor contracts for required clauses
[ ] Obtain/schedule independent bias audits
[ ] Verify candidate/employee notification process
[ ] Document AI decision-making rationale
[ ] Establish incident response protocol
[ ] Set up ongoing monitoring cadence
[ ] Map applicable regulations by jurisdiction
[ ] Train HR team on AI compliance basics
[ ] Brief legal counsel on AI tool usage
[ ] Schedule annual governance review
[ ] Create regulatory change monitoring process
Start here: If you do nothing else from this chapter, complete the AI tool inventory. You can’t govern what you can’t see. Most organizations are surprised by how many AI-powered tools they’re already using.
The Risk Register
Tracking AI risks alongside your existing risk management processes
Why a Dedicated AI Risk Register
AI risks are different from traditional HR technology risks. They involve model drift (performance degrades over time), emergent bias (bias that appears only at scale), and regulatory velocity (laws changing faster than your compliance cycle). A dedicated AI risk register, integrated with your existing risk management framework, gives you visibility and control.

Each AI tool should have an entry tracking: what it does, what data it uses, what decisions it influences, who owns it, when it was last audited, and its current compliance status across all applicable jurisdictions.
Risk Register Template
AI RISK REGISTER
Tool: [Name of AI tool/feature]
Vendor: [Vendor name]
Use case: [What it does in your process]
Risk level: High | Medium | Low
Data used: [PII, demographics, performance, etc.]
Sensitivity: [Data classification level]
Owner: [Named individual]
Last audit: [Date of last bias/compliance audit]
Next audit: [Scheduled date]
Jurisdictions: [Which regulations apply]
Status: Compliant | Review needed | Non-compliant
Incidents: [Count and date of last incident]
Notes: [Open issues, pending actions]
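Whatever system holds the register, the first automation worth adding is an overdue-audit check against the "Next audit" field. A minimal Python sketch, assuming register entries are plain dicts whose field names mirror the template above:

```python
from datetime import date

# Hypothetical register entries; field names mirror the template above.
register = [
    {"tool": "resume-screener-v2", "risk_level": "high",
     "next_audit": date(2026, 2, 1), "owner": "j.doe@example.com"},
    {"tool": "shift-scheduler", "risk_level": "medium",
     "next_audit": date(2026, 9, 15), "owner": "a.lee@example.com"},
]

def overdue_audits(register: list[dict], today: date) -> list[dict]:
    """Return register entries whose scheduled audit date has passed."""
    return [e for e in register if e["next_audit"] < today]

for entry in overdue_audits(register, date(2026, 3, 1)):
    print(f"OVERDUE: {entry['tool']} ({entry['risk_level']}) "
          f"-> notify {entry['owner']}")
```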
Integration tip: Don’t create a separate risk management process for AI. Add AI risk entries to your existing risk register and governance review cadence. AI risk is just a new category of operational risk — and you already know how to manage that.
Sources & Further Reading
• Title VII of the Civil Rights Act of 1964; Americans with Disabilities Act; Age Discrimination in Employment Act
• EEOC, “Select Issues: Assessing Adverse Impact in Software, Algorithms, and AI” (guidance removed 2025)
• NYC Local Law 144 (2023) — Automated Employment Decision Tools
• California Civil Rights Department, Regulations on Automated Decision Systems (effective Oct 2025)
• Colorado SB 24-205 (effective June 2026)
• Texas HB 149 (effective Jan 2026)
• EU Artificial Intelligence Act, Regulation 2024/1689
• Eightfold AI class action (Jan 2026)
• Federal AI legislation expected late 2026/early 2027

Regulatory content current as of March 2026. Verify before acting on legal guidance.