Ch 7 — Compliance & Risk

The mechanics of AI compliance — audits, contracts, incident response, and governance processes
Regulatory content current as of March 2026 — verify before acting on legal guidance
Under the Hood
Identify → Assess → Mitigate → Monitor → Audit → Report
Disparate Impact Analysis: The Math
The four-fifths rule calculation with real numbers
How the Four-Fifths Rule Works
The four-fifths rule is the EEOC’s primary tool for identifying adverse impact. The math is straightforward: calculate the selection rate for each demographic group, then compare each group’s rate to the group with the highest selection rate. If any group’s rate is less than 80% of the highest rate, there is evidence of adverse impact.

Selection rate = Number selected ÷ Number of applicants in that group

This applies at every stage where AI influences a decision: resume screening, interview invitations, assessments, and final offers. Each stage must be analyzed independently.
Worked Example
AI Resume Screening Results:

Group A: 200 applicants, 120 advanced
  Selection rate = 120/200 = 60%
Group B: 150 applicants, 60 advanced
  Selection rate = 60/150 = 40%
Group C: 100 applicants, 50 advanced
  Selection rate = 50/100 = 50%

Four-Fifths Test:
  Highest rate: Group A at 60%
  80% threshold: 60% × 0.80 = 48%
  Group B: 40% < 48% → ADVERSE IMPACT
  Group C: 50% > 48% → PASSES

// Group B's selection rate is only 66.7%
// of Group A's rate — well below the 80%
// threshold. This AI tool has a problem.
Important nuance: The four-fifths rule is a guideline, not a strict legal threshold. Courts consider it alongside other evidence. But failing the four-fifths test shifts the burden — you’ll need to prove the selection criteria are job-related and consistent with business necessity.
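The arithmetic above is simple enough to automate. A minimal Python sketch (the function name and return structure are my own, the numbers come from the worked example):

```python
def four_fifths_test(groups: dict[str, tuple[int, int]]) -> dict[str, dict]:
    """Apply the EEOC four-fifths rule.

    groups maps a group label to (applicants, selected).
    A group shows evidence of adverse impact when its selection
    rate falls below 80% of the highest group's rate.
    """
    rates = {g: sel / apps for g, (apps, sel) in groups.items()}
    highest = max(rates.values())
    threshold = 0.80 * highest
    return {
        g: {
            "rate": rate,
            "impact_ratio": rate / highest,
            "adverse_impact": rate < threshold,
        }
        for g, rate in rates.items()
    }

# Numbers from the worked example above
results = four_fifths_test({
    "A": (200, 120),  # 60% selection rate (highest)
    "B": (150, 60),   # 40%, i.e. 66.7% of A's rate: adverse impact
    "C": (100, 50),   # 50%, i.e. 83.3% of A's rate: passes
})
```

Running this per stage (screening, interviews, offers) operationalizes the "analyze each stage independently" requirement.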
Conducting a Bias Audit
Step-by-step process from scope definition to final report
The Audit Process (90-120 Days)
Phase 1: Scope & Setup (Weeks 1-2)
Define which AI tools are in scope. Identify the decisions each tool influences. Engage an independent third-party auditor — NYC Local Law 144 requires independence, and best practice demands it everywhere.

Phase 2: Data Collection (Weeks 3-5)
Collect demographic data for all individuals processed by the AI tool. This requires careful handling — demographic data may come from self-identification, EEOC reporting, or visual survey. Ensure data is anonymized for the auditor.

Phase 3: Analysis (Weeks 6-10)
Calculate selection rates by race/ethnicity, sex, and age. Test intersectionality (e.g., Black women vs. white men, not just Black vs. white). Analyze which input features drive disparate outcomes. Run statistical significance tests.

Phase 4: Report & Remediation (Weeks 11-14)
Document findings, identify root causes, recommend mitigation strategies, and create an action plan.
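Phase 3's "statistical significance tests" are typically two-proportion z-tests on each pair of selection rates. A stdlib-only sketch (the function name is my own, not a standard API), applied to Groups A and B from the earlier worked example:

```python
from math import erf, sqrt

def two_proportion_z(apps_a: int, sel_a: int, apps_b: int, sel_b: int):
    """Two-proportion z-test for a difference in selection rates.

    Returns (z, two-sided p-value). A small p-value means the gap
    between the groups is unlikely to be chance alone.
    """
    p_a, p_b = sel_a / apps_a, sel_b / apps_b
    # Pooled proportion under the null hypothesis of equal rates
    pooled = (sel_a + sel_b) / (apps_a + apps_b)
    se = sqrt(pooled * (1 - pooled) * (1 / apps_a + 1 / apps_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF via erf
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Group A (200 apps, 120 advanced) vs. Group B (150 apps, 60 advanced)
z, p = two_proportion_z(200, 120, 150, 60)
```

Here z is roughly 3.7 and p well under 0.001, so the 60% vs. 40% gap is both practically large (fails four-fifths) and statistically significant.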
Intersectionality Testing
Don't just test single categories:

Insufficient:
  Male vs. Female
  White vs. Non-White

Thorough:
  White Male vs. Black Male
  White Female vs. Black Female
  White Male vs. Black Female
  Hispanic Male vs. White Male
  Asian Female vs. White Female

// Test all meaningful intersections

Why it matters: A tool might pass the four-fifths test for race AND for sex separately — but fail when you test Black women specifically against white men.

// This is how bias hides in aggregate
// statistics. Intersectional testing
// is where the real disparities
// surface.
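One way to mechanize intersectional testing is to key selection rates by (race, sex) tuples instead of a single axis. A Python sketch with hypothetical toy numbers, deliberately chosen so that both single-axis tests pass while one intersection fails:

```python
def intersectional_four_fifths(records):
    """Four-fifths check over (race, sex) intersections.

    records: iterable of (race, sex, selected) tuples.
    Returns the intersections whose selection rate falls below
    80% of the best-performing intersection's rate.
    """
    counts: dict[tuple[str, str], tuple[int, int]] = {}
    for race, sex, selected in records:
        apps, sel = counts.get((race, sex), (0, 0))
        counts[(race, sex)] = (apps + 1, sel + int(selected))
    rates = {k: sel / apps for k, (apps, sel) in counts.items()}
    threshold = 0.80 * max(rates.values())
    return {k: r for k, r in rates.items() if r < threshold}

# Hypothetical toy data, 10 applicants per intersection:
data = (
    [("white", "M", True)] * 6 + [("white", "M", False)] * 4 +  # 60%
    [("white", "F", True)] * 5 + [("white", "F", False)] * 5 +  # 50%
    [("black", "M", True)] * 5 + [("black", "M", False)] * 5 +  # 50%
    [("black", "F", True)] * 4 + [("black", "F", False)] * 6    # 40%
)
# Single-axis checks pass (white vs. black and M vs. F are both
# 0.55 vs. 0.45, ratio ~0.82), but the Black-female intersection
# fails the four-fifths threshold of 0.80 * 0.60 = 0.48.
flagged = intersectional_four_fifths(data)
```

This is exactly the "bias hides in aggregate statistics" failure mode: every marginal comparison clears 80%, yet one intersection is flagged.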
Who conducts it: The auditor must be independent — not the AI vendor, not your internal team. Look for firms specializing in I/O psychology, algorithmic auditing, or employment testing validation. Budget $25K-$75K per audit depending on scope.
EEOC Enforcement Mechanics
How a discrimination complaint flows when AI is involved
The Complaint Flow
Important context: The EEOC’s 2023 guidance on AI in hiring was removed from its website following the rescission of the Biden-era AI executive order. However, the enforcement mechanism is unchanged — Title VII, ADA, and ADEA are the legal basis for every charge, not the guidance documents. The guidance was explanatory; the statutes are the source of liability.

1. Charge filed: An applicant or employee files a charge of discrimination with the EEOC, alleging that an AI-driven process discriminated against them.

2. Investigation: The EEOC investigates. They will ask for documentation of the AI tool, its decision-making process, selection rates by demographic group, and any bias audits conducted. This is where documentation either saves you or sinks you.

3. Finding: The EEOC either finds “reasonable cause” (evidence supports discrimination) or “no reasonable cause.”

4. Conciliation or litigation: If reasonable cause is found, the EEOC attempts conciliation (settlement). If that fails, the EEOC may file a lawsuit — or issue a right-to-sue letter allowing the individual to sue directly.

5. Resolution: Settlements in AI discrimination cases have included monetary damages, required bias audits, changes to hiring practices, and ongoing monitoring agreements.

New front — private litigation: The Eightfold AI class action (January 2026) alleges the platform created secret applicant scoring dossiers without disclosure, potentially violating the Fair Credit Reporting Act and California consumer reporting laws. This case demonstrates that EEOC enforcement is not the only path — private class actions under FCRA and state consumer laws are an emerging and significant threat.
The Burden of Proof Shift
Standard discrimination case:
  1. Plaintiff shows adverse impact
  2. Employer must prove business necessity
  3. Plaintiff can show less discriminatory alternative existed

When AI is involved, it gets harder:
  1. Plaintiff shows adverse impact (same as above)
  2. Employer must prove business necessity
     BUT: "the vendor told us it was fine" is NOT a valid defense
  3. Employer must show the AI's criteria are job-related and consistent with business necessity
     BUT: if it's a black box model, you may not be able to explain what the criteria even are

// This is why explainability matters.
// You need to articulate WHY the AI
// makes the decisions it makes.
Documentation is everything: When the EEOC comes knocking, your best defense is thorough documentation: why you chose this tool, what due diligence you conducted, bias audit results, how you monitored for adverse impact, and what you did when issues were found. The absence of documentation is itself evidence of negligence.
Contract Provisions for AI Vendors
Specific clauses your AI vendor contracts must include
Why Standard Contracts Aren’t Enough
Standard SaaS contracts were not designed for AI. They cover uptime, data security, and liability caps — but they don’t address model transparency, bias accountability, algorithmic change notification, or audit access. If your AI vendor’s tool creates adverse impact, your standard contract likely doesn’t give you the tools to investigate, remediate, or recover damages. You need AI-specific provisions.
Non-Negotiable Clauses
Bias audit rights: You can commission independent bias audits at any time. Vendor must cooperate and provide necessary data and access.

Model documentation: Vendor must provide model card, training data description, validation methodology, and known limitations upon request.

Change notification: Vendor must notify you in writing at least 30 days before any model update, retraining, or algorithmic change that could affect outcomes.

Sub-processor disclosure: Vendor must disclose any third parties that process your data, including AI model providers they use under the hood.
Contract Template: Key Provisions
AI VENDOR CONTRACT PROVISIONS

1. BIAS AUDIT RIGHTS
   Employer may conduct or commission independent bias audits of the AI system at any time. Vendor shall provide all data, access, and cooperation necessary. Cost borne by vendor if audit reveals adverse impact.

2. INDEMNIFICATION
   Vendor shall indemnify Employer against claims, damages, and costs arising from discriminatory outcomes of the AI system, including but not limited to EEOC charges and litigation.

3. DATA PROCESSING & DELETION
   All candidate/employee data processed solely for Employer's purposes. Upon termination, Vendor shall delete all data within 30 days and provide written certification.

4. AUDIT ACCESS
   Employer may inspect model logic, training data methodology, and decision criteria. Vendor may not claim trade secret protection to deny access during a compliance audit.

5. MODEL CHANGE NOTIFICATION
   Written notice 30 days prior to any model update. Employer may terminate if update materially changes outcomes.

6. FCRA & CONSUMER REPORTING COMPLIANCE
   Vendor warrants that its AI scoring does not constitute a "consumer report" under FCRA, or if it does, Vendor shall comply with all FCRA disclosure and consent requirements.

// See: Eightfold AI class action (Jan 2026)
// Secret scoring dossiers = FCRA exposure
Negotiation reality: Large vendors may push back on some of these. That’s a data point. A vendor confident in their AI welcomes audit rights. A vendor that won’t grant indemnification for bias may know something about their model they’re not telling you. The Eightfold AI class action (Jan 2026) — alleging secret applicant scoring dossiers violated FCRA — makes clause 6 (FCRA compliance) especially critical. If your vendor generates applicant scores, ensure FCRA obligations are addressed in writing.
EU AI Act Technical Requirements
What “conformity assessment” actually means and what you need to build
Quality Management System
The EU AI Act requires a quality management system (QMS) for high-risk AI. This is a documented set of policies and procedures covering the entire AI lifecycle:

Design & development: How AI tools are selected, configured, and validated before deployment.
Data management: How training data is sourced, cleaned, assessed for bias, and maintained.
Testing & validation: How the system is tested against accuracy, robustness, and fairness benchmarks.
Deployment: How the system is rolled out, monitored, and maintained in production.
Post-market monitoring: How you track performance, detect drift, and respond to incidents after deployment.
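Post-market monitoring can start as simply as comparing live selection rates against the validated baseline. A sketch under assumed data shapes (each argument maps group to (applicants, selected)); the 5-percentage-point tolerance is illustrative, not a regulatory threshold:

```python
def selection_rate_drift(baseline, current, tolerance=0.05):
    """Flag groups whose selection rate moved more than `tolerance`
    (absolute) versus the validated baseline, as a simple
    post-market drift check.
    """
    drifted = {}
    for group, (apps, sel) in current.items():
        base_apps, base_sel = baseline[group]
        delta = sel / apps - base_sel / base_apps
        if abs(delta) > tolerance:
            drifted[group] = round(delta, 3)
    return drifted

# Baseline from validation vs. a hypothetical recent window
drifted = selection_rate_drift(
    baseline={"A": (200, 120), "B": (150, 60)},  # rates 0.60, 0.40
    current={"A": (180, 110), "B": (140, 46)},   # rates ~0.61, ~0.33
)
```

A real QMS would run this per stage and per intersection on a schedule, and route any flagged group into the incident response process.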
Human Oversight Mechanisms
The Act requires that high-risk AI systems allow effective human oversight. This means humans must be able to: understand the system’s capabilities and limitations, monitor its operation, interpret its outputs correctly, override or halt the system at any time, and intervene in real-time when necessary.
Conformity Assessment Checklist
EU AI ACT — HIGH-RISK CONFORMITY

[ ] Risk management system established
[ ] Data governance procedures documented
[ ] Technical documentation complete
    // System description, design specs,
    // training methodology, validation
[ ] Record-keeping & logging active
    // Automatic logs of all operations
    // sufficient for post-hoc audit
[ ] Transparency measures implemented
    // Users informed of AI use,
    // instructions for use provided
[ ] Human oversight mechanisms in place
[ ] Accuracy & robustness validated
    // Tested against adversarial inputs,
    // edge cases, and data drift
[ ] Cybersecurity measures adequate
[ ] Conformity declaration filed
[ ] CE marking affixed (if applicable)
[ ] Post-market monitoring plan active
Practical tip: Even if you don’t operate in the EU, use this checklist as your internal standard. It’s the most comprehensive AI governance framework available, and US regulations are converging toward similar requirements. Building to this standard now means you won’t be scrambling later.
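Teams sometimes track a checklist like this as structured data so gaps surface automatically in governance reviews. A trivial Python sketch; the item names are my own shorthand for the checklist above:

```python
# Shorthand identifiers for the conformity checklist items above
CONFORMITY_ITEMS = [
    "risk_management_system",
    "data_governance",
    "technical_documentation",
    "record_keeping_logging",
    "transparency_measures",
    "human_oversight",
    "accuracy_robustness",
    "cybersecurity",
    "conformity_declaration",
    "ce_marking",
    "post_market_monitoring",
]

def outstanding_items(completed: set[str]) -> list[str]:
    """Items still open, in checklist order."""
    return [item for item in CONFORMITY_ITEMS if item not in completed]

todo = outstanding_items({"risk_management_system", "data_governance"})
```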
Building an AI Incident Response Plan
When bias is discovered or AI makes a harmful decision: your playbook
Why You Need a Plan Before You Need It
AI incidents will happen. A bias audit reveals adverse impact. A candidate complains that the AI rejected them unfairly. A model update changes outcomes dramatically. The question isn’t whether, it’s when. Having a documented incident response plan before an incident occurs is the difference between a controlled response and a chaotic scramble that makes the legal situation worse.
The Five Phases
1. Containment: Stop the AI from making additional potentially harmful decisions. This may mean suspending the tool, reverting to manual processes, or adding human review to every decision.

2. Investigation: Determine scope and root cause. How many decisions were affected? Which demographic groups were impacted? What caused the issue — model drift, data problem, vendor update?

3. Remediation: Fix the root cause. This may involve retraining, reconfiguring, or replacing the AI tool. Review all affected decisions and determine if any need to be reversed.

4. Notification: Notify affected individuals, regulatory bodies (if required), and leadership. Timing and content of notification may be governed by regulation.

5. Prevention: Update monitoring to catch similar issues earlier. Document lessons learned. Update the risk register.
Incident Response Template
AI INCIDENT RESPONSE PLAN

PHASE 1: CONTAINMENT (0-4 hours)
[ ] Suspend AI tool or add human review
[ ] Notify incident response team
[ ] Preserve all logs and data
[ ] Document initial scope estimate

PHASE 2: INVESTIGATION (1-5 days)
[ ] Identify affected decisions/individuals
[ ] Determine root cause
[ ] Calculate demographic impact
[ ] Assess regulatory reporting obligations
[ ] Engage legal counsel

PHASE 3: REMEDIATION (1-4 weeks)
[ ] Fix root cause in AI system
[ ] Review and potentially reverse decisions
[ ] Re-engage affected individuals
[ ] Validate fix with bias testing

PHASE 4: NOTIFICATION (as required)
[ ] Notify affected individuals
[ ] File regulatory reports if required
[ ] Brief leadership/board

PHASE 5: PREVENTION (ongoing)
[ ] Update monitoring thresholds
[ ] Document lessons learned
[ ] Update risk register
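The bounded phase windows can be turned into concrete deadlines the moment an incident is detected. A small sketch; the phase names and durations mirror the plan above, but the data structure itself is an assumption:

```python
from datetime import datetime, timedelta

# Phase windows from the incident response plan above
# (notification and prevention are open-ended, so omitted)
PHASE_WINDOWS = {
    "containment": timedelta(hours=4),
    "investigation": timedelta(days=5),
    "remediation": timedelta(weeks=4),
}

def phase_deadlines(detected_at: datetime) -> dict[str, datetime]:
    """Deadline for each bounded phase, measured from detection."""
    return {phase: detected_at + window
            for phase, window in PHASE_WINDOWS.items()}

# Hypothetical incident detected at 9:00 on 2026-03-02
deadlines = phase_deadlines(datetime(2026, 3, 2, 9, 0))
```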
Legal privilege: Involve legal counsel from Phase 1. Much of the investigation may be protectable under attorney-client privilege if structured correctly. Do not circulate preliminary findings broadly — work through counsel.
Documentation Requirements
What to document, how to retain it, and why it’s your strongest defense
What to Document and Retain
Model selection rationale: Why this tool was chosen over alternatives. What evaluation criteria were used. What due diligence was performed.

Training data description: What data the model was trained on (from the vendor). How representative it is. Known limitations.

Validation results: Pre-deployment testing results, including bias testing across demographic groups.

Bias audit results: All bias audit reports, findings, and remediation actions taken.

Deployment decisions: Who approved deployment, what risk assessment was conducted, what human oversight was established.

Override logs: Every instance where a human overrode the AI’s recommendation, with the reason documented.

Complaint records: Any complaints from candidates or employees about AI-driven decisions, with investigation notes and outcomes.
Retention Periods by Jurisdiction
RETENTION REQUIREMENTS:

Federal (EEOC):
  Hiring records: 1 year from decision
  EEOC charge records: until resolution + 1 yr

OFCCP (federal contractors):
  Selection records: 2 years
  AAP documentation: 2 years

NYC Local Law 144:
  Bias audit results: 4 years
  Published summary: continuously available

EU AI Act:
  Technical documentation: 10 years
  Automatic logs: 6 months minimum
  Conformity documentation: 10 years

California CRD (Oct 2025):
  All AI decision records: 4 years
  Bias testing documentation: 4 years

State laws (varies):
  Illinois AIVRA: consent records retained
  Colorado AI Act (June 2026): impact assessments: 3 yrs

BEST PRACTICE: Retain everything for the longest applicable period + 1 year. When in doubt, don't delete.
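The "longest applicable period plus one year" rule is easy to encode. A Python sketch; the period table is a simplification of the retention requirements above and must be verified against current law before anything is deleted:

```python
from datetime import date

# Retention periods in years, simplified from the table above.
# Verify against current law; some obligations (e.g. EEOC charges,
# Illinois AIVRA consent records) are event-based, not fixed-term.
RETENTION_YEARS = {
    "eeoc_hiring": 1,
    "ofccp_selection": 2,
    "nyc_ll144_audit": 4,
    "ca_crd_decisions": 4,
    "eu_ai_act_tech_docs": 10,
}

def earliest_deletion_date(record_date: date,
                           jurisdictions: list[str]) -> date:
    """Longest applicable retention period plus a one-year buffer
    (the 'when in doubt, don't delete' best practice above)."""
    years = max(RETENTION_YEARS[j] for j in jurisdictions) + 1
    try:
        return record_date.replace(year=record_date.year + years)
    except ValueError:  # Feb 29 record in a non-leap target year
        return record_date.replace(year=record_date.year + years, day=28)

# A hiring record subject to both EEOC and NYC LL144 rules
d = earliest_deletion_date(date(2026, 3, 1),
                           ["eeoc_hiring", "nyc_ll144_audit"])
```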
Documentation culture: The biggest risk isn’t making a wrong decision about AI — it’s being unable to explain why you made the decision you made. Build documentation into your AI governance workflows so it happens automatically, not as an afterthought.
Annual AI Governance Review
A structured annual review process for AI compliance and risk management
Why Annual Isn’t Enough (But Start There)
AI systems drift, regulations change, vendors update models, and your workforce evolves. An annual governance review is the minimum cadence for a comprehensive assessment. High-risk tools should have quarterly bias monitoring in between. The annual review is a structured process to step back and evaluate the full picture: are we compliant, are our tools performing, and are we prepared for what’s coming?
The Annual Review Process
1. Inventory update: Confirm the AI tool inventory is complete. Identify any new AI tools deployed since last review.
2. Compliance assessment: Map current regulations to each tool. Identify any new regulations that have taken effect.
3. Bias audit review: Review all bias audit results from the past year. Track trends across audit cycles.
4. Incident review: Assess any AI incidents that occurred. Evaluate response effectiveness and lessons learned.
5. Vendor evaluation: Review vendor compliance with contract provisions. Assess any model updates and their impact.
6. Policy updates: Update AI governance policies based on regulatory changes and operational learnings.
7. Leadership reporting: Present findings, risks, and recommendations to leadership or the board.
Governance Calendar Template
AI GOVERNANCE ANNUAL CALENDAR

Q1 — JANUARY-MARCH
[ ] Complete AI tool inventory update
[ ] Review regulatory changes from prior year
[ ] Commission annual bias audits
[ ] Review and renew vendor contracts

Q2 — APRIL-JUNE
[ ] Receive and review bias audit results
[ ] Conduct vendor performance reviews
[ ] Update risk register
[ ] Mid-year compliance check

Q3 — JULY-SEPTEMBER
[ ] Review incident log and response actions
[ ] Test incident response plan (tabletop)
[ ] Update AI governance policies
[ ] Train HR team on policy updates

Q4 — OCTOBER-DECEMBER
[ ] Comprehensive annual governance review
[ ] Prepare leadership/board report
[ ] Set priorities for next year
[ ] Publish updated bias audit summaries
Chapter 7 complete. You now have the operational framework for AI compliance: the math behind adverse impact, how to run bias audits, what your contracts need, how to respond to incidents, what to document, and how to structure ongoing governance. This is your turf — own it.
Sources & Further Reading
• EEOC Uniform Guidelines on Employee Selection Procedures (29 CFR Part 1607) — four-fifths rule
• Title VII, ADA, ADEA — federal anti-discrimination statutes
• NYC Local Law 144 — bias audit and notification requirements
• California CRD Regulations on Automated Decision Systems (Oct 2025) — 4-year recordkeeping
• Colorado SB 24-205 (June 2026) — impact assessments
• Texas HB 149 (Jan 2026) — intent standard, no disparate impact standalone
• EU AI Act, Regulation 2024/1689 — high-risk classification, conformity assessment
• Fair Credit Reporting Act (FCRA) — applicable to third-party AI scoring; see Eightfold AI case (2026)
