Ch 9 — Building an AI Policy

The detailed mechanics of creating, implementing, and maintaining AI governance — templates, checklists, and frameworks
Regulatory content current as of March 2026 — verify before acting on legal guidance
Under the Hood
Policy lifecycle: Inventory → Classify → Draft → Legal → Approve → Implement
AI Tool Inventory
Before you can govern AI, you need to know what you have
The Discovery Problem
Most organizations dramatically undercount their AI tools. Your ATS has AI screening you may not have enabled. Your HRIS vendor shipped an AI assistant in the last update. Three managers signed up for different AI note-taking tools. Someone in recruiting is using a free Chrome extension that reads candidate profiles. You can’t govern what you can’t see. A systematic inventory is the non-negotiable first step.
Four Discovery Methods
1. Vendor audit: Review every HR tech vendor contract for AI/ML features, including recently added capabilities
2. Shadow AI survey: Anonymous survey asking employees what AI tools they use for work (emphasize no punishment)
3. Network monitoring: Work with IT to identify traffic to known AI services (ChatGPT, Claude, Gemini, Midjourney, etc.)
4. Procurement records: Review all software purchases, subscriptions, and expense reports for AI-related tools
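Method 3 can be partly scripted once IT exports proxy or DNS logs. A minimal sketch in Python, assuming a CSV log with timestamp, user, and domain columns; both the log format and the domain list are illustrative, so adapt them to your environment:

```python
# Sketch: count proxy-log requests that hit known AI service domains.
# The CSV layout and AI_DOMAINS list are illustrative assumptions.
import csv
from collections import Counter

AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "claude.ai",
    "gemini.google.com", "midjourney.com",
}

def scan_proxy_log(path: str) -> Counter:
    """Count requests per AI domain from a CSV proxy log
    with columns: timestamp, user, domain."""
    hits = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].strip().lower()
            if domain in AI_DOMAINS:
                hits[domain] += 1
    return hits
```

The point is scope, not surveillance: the counts tell you which tools to add to the inventory, not whom to discipline.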
Inventory Template
AI TOOL INVENTORY
Tool Name: ________________________
Vendor: ________________________
Category: [ATS | HRIS | Benefits | LMS | Payroll | Analytics | General AI | Other]
AI Features: ________________________
Data Accessed: [Public | Internal | Confidential | PII]
Users: [Department/Role list]
Approved? [Yes | No | Pending Review]
DPA in Place? [Yes | No | N/A]
Last Security Review: ________
Employment Decision? [Yes | No]
Bias Audit? [Yes | No | N/A]
Owner: ________________________
Notes: ________________________
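If the inventory will be filtered or audited programmatically, the same template can live as a structured record. A minimal sketch, with field names mirroring the template above; the `needs_review` helper and its triage rule are illustrative assumptions, not a legal standard:

```python
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    """One row of the AI tool inventory, mirroring the paper template."""
    tool_name: str
    vendor: str
    category: str          # ATS | HRIS | Benefits | LMS | Payroll | Analytics | General AI | Other
    ai_features: str
    data_accessed: str     # Public | Internal | Confidential | PII
    users: list[str] = field(default_factory=list)
    approved: str = "Pending Review"   # Yes | No | Pending Review
    dpa_in_place: str = "N/A"          # Yes | No | N/A
    employment_decision: bool = False
    bias_audit: str = "N/A"            # Yes | No | N/A
    owner: str = ""

def needs_review(rec: AIToolRecord) -> bool:
    """Illustrative triage: flag tools that touch employment decisions
    or PII without both approval and a completed bias audit."""
    high_stakes = rec.employment_decision or rec.data_accessed == "PII"
    return high_stakes and (rec.approved != "Yes" or rec.bias_audit != "Yes")
```

Keeping the inventory as data rather than a document makes the quarterly audit a query instead of a rereading exercise.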
Expect surprises: When organizations first do an AI inventory, they typically find 3–5x more AI tools in use than they expected. Don’t panic — that’s the point. You can’t manage risk you don’t know about. As of 2026, only 28% of organizations have a formal AI policy — if you’re doing this inventory, you’re already ahead of most.
Risk Classification Framework
Classify every AI use case by risk level, aligned with EU AI Act categories
Why Risk-Based Classification
Not all AI use carries the same risk. An AI tool that schedules meetings is fundamentally different from one that scores candidates for hiring. Your governance effort should be proportional to risk — light oversight for low-risk uses, heavy oversight for high-risk. This is also the approach the EU AI Act takes, and even if you’re not subject to it today, it’s becoming the global standard for AI governance frameworks.
Classification Criteria
Impact on individuals: Does the AI affect employment, compensation, access to opportunities?
Data sensitivity: What data does it access or process?
Autonomy level: Does a human review the output before action is taken?
Reversibility: Can the decision be easily undone if wrong?
Scale: How many people are affected?
Risk Classification Matrix
MINIMAL RISK (Standard oversight)
Meeting scheduling, email drafting assistance, document formatting, internal search
Governance: General acceptable use policy
Review: Annual

LIMITED RISK (Enhanced oversight)
Report generation, data summarization, chatbot for general HR Q&A, learning recommendations
Governance: Approved tool list + usage logging
Review: Semi-annual

HIGH RISK (Strict oversight)
Resume screening, candidate ranking, performance analysis, compensation modeling, promotion recommendations, workforce planning
Governance: Bias audit + human-in-the-loop + adverse impact monitoring + legal review
Review: Quarterly

UNACCEPTABLE (Prohibited)
Automated termination decisions, emotion detection for evaluation, social scoring, subliminal manipulation of employees
Governance: Banned. No exceptions.
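A first-pass triage of the matrix can be expressed as a short decision function. This is a sketch under simplifying assumptions: the branch order and criteria names are illustrative, and the real classification still needs human and legal judgment, not just code:

```python
# Illustrative first-pass triage against the risk matrix.
# Branch order is an assumption; it does not replace legal review.
PROHIBITED_USES = {
    "automated termination decisions", "emotion detection for evaluation",
    "social scoring", "subliminal manipulation",
}

def classify_risk(use_case: str, *, affects_employment: bool,
                  data_sensitivity: str) -> str:
    """Rough risk tier. data_sensitivity: Public | Internal | Confidential | PII."""
    if use_case.lower() in PROHIBITED_USES:
        return "UNACCEPTABLE"   # banned, no exceptions
    if affects_employment:
        return "HIGH"           # requires bias audit + human-in-the-loop
    if data_sensitivity in ("Confidential", "PII"):
        return "LIMITED"
    return "MINIMAL"
```

Anything the function marks MINIMAL or LIMITED should still be spot-checked against the scale and reversibility criteria above.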
Regulatory alignment: The EU AI Act explicitly classifies “AI systems used in employment, workers management and access to self-employment” as high-risk. US states are following suit: California CRD regulations (Oct 2025) require meaningful human oversight for automated decision tools, the Colorado AI Act (June 2026) mandates transparency and appeal processes, and Texas HB 149 (Jan 2026) adds disclosure requirements. Even if you’re not subject to these yet, aligning with the highest standard future-proofs your policy.
Policy Template: Section by Section
A complete AI policy template with every section you need
Policy Structure Overview
A comprehensive AI policy needs ten core sections. Some will be short, some substantial — but skipping any of them leaves a gap that will surface during an incident. Don’t try to write the perfect version. Write the first version and plan to iterate. A policy that exists and is imperfect is infinitely better than a perfect policy that’s still in draft.
Writing Tips
Plain language: If a frontline manager can’t understand it, it won’t be followed
Concrete examples: Don’t just say “prohibited” — give a specific scenario
Version control: Date every version, track changes, archive previous versions
Living document: State the review cadence in the policy itself
Complete Policy Outline
1. PURPOSE & SCOPE: Why this policy exists, who it applies to, what "AI" means in this context
2. DEFINITIONS: AI, ML, LLM, automated decision tool, AI-assisted vs. AI-automated
3. ACCEPTABLE USE: Approved tools, approved use cases, required disclosures
4. PROHIBITED USE: Specific prohibitions with examples (e.g., PII in public tools)
5. DATA HANDLING: Classification tiers, which data goes where, data retention, deletion requirements
6. PROCUREMENT: Approval process, security requirements, bias audit requirements
7. GOVERNANCE: Committee structure, RACI, decision authority
8. COMPLIANCE: Applicable laws, notification requirements, audit obligations
9. ENFORCEMENT: Violation process, consequences, reporting
10. REVIEW & UPDATE: Review cadence, update triggers, approval workflow, version history
Scope tip: Explicitly state that this policy applies to AI features embedded in existing tools, not just standalone AI products. Most AI risk comes from features buried inside tools employees already use, not from ChatGPT.
RACI Matrix for AI Governance
Detailed RACI for common AI decisions in HR
Why RACI for AI
AI governance fails when nobody knows who owns the decision. The most common failure mode isn’t bad policy — it’s unclear ownership. Legal thinks IT is reviewing bias. IT thinks HR is handling compliance. HR thinks legal cleared it. Meanwhile the tool is live and nobody did the review. The RACI matrix eliminates this ambiguity for every common AI governance decision.
RACI Template
R = Responsible, A = Accountable, C = Consulted, I = Informed

Tool Selection & Approval: HR Ops (R), IT (C), Legal (C), CHRO (A), Biz Unit (I)
Data Access Decisions: IT (R), HR Ops (C), Legal (C), CISO (A), Biz Unit (I)
Bias Audit: HR Ops (R), DEI (C), Legal (C), CHRO (A), Vendor (I)
Incident Response: IT (R), HR Ops (R), Legal (A), CHRO (I), Comms (C)
Policy Exceptions: HR Ops (R), Legal (C), IT (C), Committee (A), Requester (I)
Vendor Management: Procurement (R), IT (C), Legal (C), HR Ops (A), Finance (I)
Employee Communication: HR Ops (R), Comms (C), Legal (C), CHRO (A), Managers (I)
One accountable person: The “A” in RACI must be exactly one person or role for each decision. If two people are accountable, nobody is. This is where governance breaks down — be explicit, even if it feels uncomfortable.
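The exactly-one-accountable rule is easy to check mechanically if the RACI table is kept as data. A minimal sketch that validates a table, using two rows from the matrix above as sample data:

```python
def validate_raci(raci: dict[str, dict[str, str]]) -> list[str]:
    """Return problems: each decision needs exactly one 'A' and at least one 'R'."""
    problems = []
    for decision, roles in raci.items():
        accountable = [r for r, code in roles.items() if code == "A"]
        responsible = [r for r, code in roles.items() if code == "R"]
        if len(accountable) != 1:
            problems.append(f"{decision}: {len(accountable)} accountable roles (need exactly 1)")
        if not responsible:
            problems.append(f"{decision}: no responsible role")
    return problems

RACI = {
    "Tool Selection & Approval": {"HR Ops": "R", "IT": "C", "Legal": "C",
                                  "CHRO": "A", "Biz Unit": "I"},
    "Bias Audit": {"HR Ops": "R", "DEI": "C", "Legal": "C",
                   "CHRO": "A", "Vendor": "I"},
}
```

Run the check whenever the matrix changes; a table that fails it is a governance gap waiting to surface during an incident.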
Legal Review Checklist
What your legal team needs to evaluate for every AI tool
Why Legal Review Is Non-Negotiable
AI in HR operates at the intersection of employment law, privacy law, contract law, and intellectual property law. A single AI tool can create exposure in all four areas simultaneously. Your legal team needs a structured checklist — not because they don’t know the law, but because AI creates novel combinations of risk that don’t map cleanly to any single practice area.
Key Legal Domains
Employment law: Title VII, ADA, ADEA implications; state-specific AEDT laws (NYC Local Law 144, California CRD, Colorado AI Act, Texas HB 149). Note: the 2023 EEOC guidance on AI in hiring was pulled from the website by the current administration, but the underlying statutes (Title VII, ADA, ADEA) still apply and create liability regardless
Privacy law: CCPA/CPRA, GDPR, state biometric laws (BIPA), employee monitoring laws
Contract law: Vendor terms, liability allocation, indemnification, service levels
IP law: Ownership of AI-generated content, training data rights, confidentiality
Legal Review by Risk Area
EMPLOYMENT LAW
Adverse impact analysis performed?
Four-fifths rule compliance verified?
ADA accommodation process intact?
State AEDT notification requirements met?
EEOC 2023 guidance pulled, but Title VII/ADA/ADEA still apply?
State AEDT laws: CA CRD, CO AI Act, TX HB 149?
FCRA compliance? (see Eightfold AI class action, Jan 2026)

PRIVACY LAW
Data processing agreement (DPA) executed?
Data retention/deletion terms defined?
Cross-border data transfer addressed?
Biometric data collection (if any) consented?
Employee monitoring laws satisfied?
CA CRD 4-year recordkeeping requirement met?

CONTRACT TERMS
Vendor liability for AI errors defined?
Indemnification for bias claims included?
Right to audit vendor's AI systems?
Termination rights if compliance fails?

INTELLECTUAL PROPERTY
Ownership of AI-generated output clear?
Vendor training on your data restricted?
Confidentiality of inputs guaranteed?

LIABILITY & INSURANCE
E&O / cyber insurance covers AI use?
EPLI updated for AI-related claims?
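The four-fifths rule on the employment-law checklist is directly computable: each group's selection rate should be at least 80% of the highest group's rate. A minimal sketch with illustrative numbers:

```python
def four_fifths_check(selected: dict[str, int], applied: dict[str, int]) -> dict[str, float]:
    """Return each group's impact ratio (its selection rate divided by the
    highest group's rate). Ratios below 0.8 suggest adverse impact under
    the EEOC four-fifths rule of thumb."""
    rates = {g: selected[g] / applied[g] for g in applied}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Illustrative numbers: 50/100 selected in group A, 30/100 in group B.
ratios = four_fifths_check({"A": 50, "B": 30}, {"A": 100, "B": 100})
# B's impact ratio is 0.6, below the 0.8 threshold: flag for review
```

A ratio below 0.8 is a screening signal, not a verdict; it should trigger the adverse impact analysis and legal review items above, not an automatic conclusion of discrimination.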
Don’t skip this: The legal landscape for AI in employment is evolving rapidly. The 2023 EEOC guidance was pulled, but the underlying statutes still create full liability. New state laws (CA CRD, CO AI Act, TX HB 149), FCRA class actions like the Eightfold AI suit (Jan 2026), and pending court decisions are creating obligations that didn’t exist 12 months ago. Budget for quarterly legal landscape reviews.
Training Program Design
Different audiences need different training — design for each
Audience-Specific Training
A one-size-fits-all AI training program fails everyone. An executive needs to understand strategic risk and governance. A manager needs to know the policy and how to handle team questions. A frontline employee needs to know which tools to use and what data to protect. Design for the decision each audience actually makes.
Training Frequency
Initial onboarding: Required within 30 days of policy launch (or new hire start)
Annual refresher: Required for all employees, 30 minutes max
Policy update training: Required within 15 days of material policy changes
Role-specific deep dives: Quarterly for managers, semi-annual for executives
Incident-driven: After any AI-related incident, targeted training within 30 days
Curriculum Outline
EXECUTIVE TRAINING (60 min, semi-annual)
• AI risk landscape and liability
• Governance committee role and authority
• Board reporting requirements
• Strategic AI investment decisions
• Regulatory forecast

MANAGER TRAINING (90 min, quarterly)
• Full policy walkthrough
• How to handle team AI questions
• Identifying unauthorized AI use
• Exception request process
• When to escalate
• Assessment: scenario-based quiz

EMPLOYEE TRAINING (45 min, annual)
• Approved tools and use cases
• Data classification quick reference
• What NOT to put into AI tools
• Disclosure requirements
• How to request new tools
• Assessment: pass/fail quiz

SPECIALIST TRAINING (varies)
• Recruiters: AI in hiring compliance
• HRIS team: AI feature configuration
• IT: AI security monitoring
Make it practical: The best AI training includes hands-on exercises with approved tools. Let employees practice using ChatGPT Enterprise with safe data during training, rather than just lecturing about the policy. Muscle memory beats memorization.
Policy Monitoring & Enforcement
How to ensure compliance — and when to coach vs. when to enforce
Monitoring Mechanisms
A policy without monitoring is a suggestion. You need technical and procedural controls working together:

Usage logging: Enterprise AI tools should log usage for audit purposes. Work with IT to ensure logging is enabled.
Network monitoring: Track traffic to known AI services — not to punish, but to understand scope.
Periodic audits: Quarterly review of AI tool usage against approved list.
Anonymous reporting: A channel for employees to report concerning AI use without fear of retaliation.
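The quarterly audit can be partly scripted by diffing observed tool usage (from logging or network monitoring) against the approved list. A minimal sketch; the tool names are illustrative:

```python
def audit_usage(observed: set[str], approved: set[str],
                pending: set[str]) -> dict[str, set[str]]:
    """Split observed AI tools into approved, pending-review, and unknown.
    Unknown tools are candidates for shadow-AI follow-up, not automatic
    discipline."""
    return {
        "approved": observed & approved,
        "pending": observed & pending,
        "unknown": observed - approved - pending,
    }
```

Feeding the "unknown" bucket back into the inventory process closes the loop between Chapter 9's first step (discovery) and its monitoring step.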
Coach vs. Enforce
Coach first when: First offense, no malicious intent, no data breach, employee was unaware of policy, the violation is a gray area

Enforce when: Repeated violations after coaching, deliberate policy circumvention, PII or confidential data exposed, employment decisions made without required human review, refusal to complete training
Violation Response Process
LEVEL 1: Education
First-time, low-risk violation
Action: Manager conversation, policy review, document in file (non-disciplinary)
Note: Most violations fall here

LEVEL 2: Formal Warning
Repeat violation or medium-risk first offense
Action: Written warning, mandatory training, usage monitoring for 90 days

LEVEL 3: Disciplinary Action
Serious or willful violation
Action: Suspension of AI tool access, formal disciplinary process, potential termination for egregious cases
Note: Consult ER and legal before action

LEVEL 4: Incident Response
Data breach or compliance violation
Action: Immediate containment, legal notification, regulatory reporting if required, post-incident review, policy update
Culture over compliance: Your goal is a culture where employees want to follow the AI policy, not one where they hide their AI use out of fear. Lead with education for the first 6 months. Enforcement catches the willful violators; culture catches everyone else.
Annual Policy Review Process
A structured approach to keeping your AI policy current
Why Annual Isn’t Enough
AI moves faster than annual review cycles. A formal annual review is the minimum, but you also need triggers for off-cycle updates: new regulations, significant AI incidents (yours or industry), major vendor changes, new AI capabilities that change the risk profile. The annual review is the comprehensive check; trigger-based updates handle the urgent stuff.
Review Inputs
Regulatory changes: New laws, enforcement actions, agency guidance; federal AI legislation expected late 2026/early 2027 may harmonize the current state patchwork
Incident analysis: Every AI-related incident from the past year, root causes, and whether policy gaps contributed
Technology changes: New AI capabilities, vendor updates, new tools in the market
Employee feedback: Training evaluations, exception requests, survey data on AI usage
Vendor updates: Changes to vendor AI features, terms of service, or data practices
Benchmark comparison: How peer organizations are handling AI governance
Annual Review Template
ANNUAL AI POLICY REVIEW
Review Date: ________  Version: ___

1. REGULATORY SCAN
Federal law changes (EEOC, FTC, DOL)
Expected federal AI legislation (late 2026/early 2027)
State law changes (CA CRD, CO AI Act, TX HB 149, etc.)
International requirements (EU AI Act, etc.)
Industry-specific guidance & class action trends

2. INCIDENT REVIEW
All AI incidents cataloged and analyzed
Root cause: policy gap vs. compliance gap
Remediation actions completed

3. TECHNOLOGY REVIEW
AI tool inventory updated
New vendor AI features assessed
Risk classifications current

4. STAKEHOLDER INPUT
Employee feedback collected
Manager feedback collected
Exception log reviewed for patterns

5. POLICY UPDATES
Sections requiring revision identified
Drafts reviewed by legal
Governance committee approved
Communication plan for changes
Training updated to reflect changes

APPROVAL
CHRO: ________  Date: ________
Legal: ________  Date: ________
CISO: ________  Date: ________
Next up: Chapter 10 covers AI implementation planning — how to take everything you’ve learned in this course and build a 90-day action plan for bringing AI governance to life in your organization. Policy is the blueprint; implementation is the build.
Sources & Further Reading
• “Only 28% of organizations have a formal AI policy” — industry survey data, 2026
• EU AI Act risk classification framework — Regulation 2024/1689
• California CRD Regulations (Oct 2025) — 4-year recordkeeping
• Colorado SB 24-205 (June 2026)
• Texas HB 149 (Jan 2026)
• FCRA — Eightfold AI class action (Jan 2026)
• Federal AI legislation expected late 2026/early 2027
