Ch 9 — Building an AI Policy

Creating practical AI governance that protects your organization without killing innovation
Regulatory content current as of March 2026 — verify before acting on legal guidance
High Level
Scope → Principles → Rules → Roles → Review → Communicate
Why You Need an AI Policy Now
The risk of inaction is bigger than the risk of imperfect policy
The Reality on the Ground
Your employees are already using ChatGPT for work. They’re drafting emails, summarizing documents, generating reports, and troubleshooting problems — with or without your permission. Your vendors are embedding AI into tools you already pay for, often without explicit notification. As of 2026, only about 28% of organizations have a formal AI policy. That means the vast majority are operating without guardrails. Without a policy, you don’t have governance — you have hope. Hope that nobody pastes employee SSNs into a public AI tool. Hope that managers aren’t using AI to make termination decisions. Hope isn’t a strategy.
The Cost of Waiting
No Policy
Employees use whatever tools they want. Confidential data leaks to public AI services. Inconsistent AI use across departments. No audit trail. Legal exposure when something goes wrong. You find out about problems after the damage is done.
Imperfect Policy
Employees have clear guidelines even if incomplete. Confidential data has basic protections. Consistent baseline across the org. A framework to iterate on. You find out about issues through the process, not the lawsuit.
Right now: 78% of knowledge workers are using AI at work, yet only 28% of organizations have a formal AI policy. Meanwhile, the regulatory landscape has accelerated: California CRD regulations (Oct 2025), Colorado AI Act (June 2026), Texas HB 149 (Jan 2026), and the Eightfold AI class action are raising the stakes. Federal AI legislation is expected by late 2026. Your people are using AI — the question is whether you have any visibility or control.
Acceptable Use Guidelines
What employees can and can’t use AI for
Building the Boundaries
An acceptable use policy needs three components: approved tools (which AI products are sanctioned), approved uses (what tasks are permitted), and required disclosures (when employees must flag that AI was used). The goal isn’t to ban everything — it’s to create clear lanes. Employees work better when they know the rules, and most want guardrails on something this new.
Required Disclosures
Always disclose:
• AI-generated content in external communications
• AI-assisted employment decisions
• AI-generated candidate assessments
• AI-drafted legal or compliance documents

Optional disclosure:
• AI-assisted internal drafts
• AI grammar/editing assistance
• AI-summarized meeting notes
Acceptable vs. Not Acceptable
Acceptable Uses
Drafting job descriptions with approved tools. Summarizing non-confidential documents. Generating first drafts of internal communications. Brainstorming and research assistance. Grammar and editing support. All with human review before publishing.
Not Acceptable
Pasting confidential employee data into public AI tools. Making employment decisions solely based on AI output. Using AI for performance evaluations without manager review. Submitting AI-generated work as original analysis without disclosure. Using unapproved AI tools for any work purpose.
Ops tip: Start with a short list of approved tools and approved uses. It’s easier to expand permissions than to claw them back. Employees will test the boundaries — make sure those boundaries are clearly written and easily found.
Data Classification for AI
Not all data is equal — and AI tools need different rules for each tier
Why Classification Matters
The biggest risk with AI isn’t that employees use it — it’s what data they feed into it. A recruiter summarizing a public job posting in ChatGPT? Low risk. A benefits analyst pasting employee health information into the same tool? Catastrophic. Your data classification determines which AI tools can touch which data. Without it, you’re trusting every employee to make that judgment call in the moment.
Classification Matrix
PUBLIC (Low Risk): Job postings, company blog content, public press releases, published policies → Any approved AI tool
INTERNAL (Medium Risk): Org charts, internal memos, meeting notes, project plans, non-sensitive reports → Enterprise AI tools with data agreements only
CONFIDENTIAL (High Risk): Salaries, performance reviews, disciplinary records, compensation data, strategy docs → On-premise or private-instance AI only
RESTRICTED / PII (Do Not Use): SSNs, health information, bank details, immigration documents, EEO data → Do NOT use with any AI tool
Key distinction: Enterprise versions of AI tools (like ChatGPT Enterprise or Claude for Business) typically have data processing agreements that prevent training on your data. Free consumer versions do not. This distinction must be in your policy.
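A classification policy like this can also be enforced mechanically, for example in a DLP hook or an internal tool catalog. Below is a minimal Python sketch of that gate; the tier names follow the matrix above, but the tool-category labels ("approved-consumer", "enterprise", "private-instance") are illustrative, not references to any specific product:

```python
# Hypothetical pre-flight gate: may this AI tool touch this data tier?
# Tier names mirror the classification matrix; tool categories are invented.

TIER_ALLOWED_TOOLS = {
    "PUBLIC":       {"approved-consumer", "enterprise", "private-instance"},
    "INTERNAL":     {"enterprise", "private-instance"},
    "CONFIDENTIAL": {"private-instance"},
    "RESTRICTED":   set(),  # PII tier: no AI tool, full stop
}

def may_use(tool_category: str, data_tier: str) -> bool:
    """Return True only if the tool's category is cleared for the data tier."""
    return tool_category in TIER_ALLOWED_TOOLS.get(data_tier, set())

print(may_use("enterprise", "INTERNAL"))             # True
print(may_use("approved-consumer", "CONFIDENTIAL"))  # False
print(may_use("private-instance", "RESTRICTED"))     # False
```

An unknown tier defaults to "nothing allowed," which matches the fail-closed posture the policy needs: the safe answer to an unclassified data type is no.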
AI Governance Structure
Who owns AI decisions? The RACI model for AI governance
The RACI Model
AI governance can’t live in one department. It requires a cross-functional model. RACI defines who does what:

Responsible: Does the work. For AI, this is often HR ops (policy writing), IT (tool implementation), or the business unit using the tool.
Accountable: Has final authority. Typically the CHRO or COO for HR AI, CIO/CISO for data security.
Consulted: Provides input. Legal, compliance, DEI, employee relations.
Informed: Needs to know. Employees, managers, the board.
AI Governance Committee
AI Governance Committee Structure
Chair: CHRO or VP of HR Operations
Members:
• IT Security / CISO representative
• Legal / Employment Law counsel
• DEI / Employee Relations lead
• Procurement representative
• Business unit sponsor (rotating)
Cadence: Monthly meetings, ad-hoc for incidents
Charter: Approve new AI tools, review incidents, update policy, track regulatory changes
Authority: Can block AI tool deployment, mandate audits, require remediation
Why this matters: Without a governance committee, AI decisions happen by default. The vendor pushes an update, the manager enables a feature, the recruiter signs up for a trial — and nobody assessed the risk. The committee creates a decision checkpoint.
Procurement Policy for AI Tools
Requirements before any AI tool is purchased or used
The Problem with Organic Adoption
AI tools are incredibly easy to adopt. A free trial, a Chrome extension, a Slack integration — and suddenly confidential HR data is flowing to a vendor you’ve never vetted. Procurement policy for AI isn’t about slowing innovation — it’s about ensuring the tools you adopt are safe to use with your data. Every AI tool that touches employee data needs to go through a structured review before deployment.
Procurement Checklist
Before Any AI Tool Is Approved:
1. Security Review: SOC 2 Type II? Data encryption? Access controls? Where is data stored? Who can access it?
2. Privacy Assessment: Does the vendor train on your data? Data retention policy? Right to delete? GDPR/CCPA compliance?
3. Bias Audit (employment-related tools): Independent bias audit results? Adverse impact analysis by protected class? NYC Local Law 144 compliance if applicable?
4. Legal Review: Contract terms, liability, indemnification? IP ownership of AI-generated content?
5. Data Classification: What data tier will this tool access? Is the tool certified for that tier?
Shadow AI: Gartner estimates that by 2027, over 40% of AI-related data breaches will come from unauthorized AI tool usage. Your procurement policy needs teeth — and a discovery mechanism for tools being used without approval.
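If procurement reviews live in a system of record, the checklist can double as the approval gate itself. A hypothetical Python sketch (the field names are invented for illustration, not taken from any procurement system):

```python
# Hypothetical procurement gate: a tool is approvable only when every
# review in the checklist has been completed and passed.
from dataclasses import dataclass, fields

@dataclass
class ProcurementReview:
    security_review: bool = False      # SOC 2, encryption, access controls
    privacy_assessment: bool = False   # training on data, retention, deletion
    bias_audit: bool = False           # employment-related tools
    legal_review: bool = False         # contract, liability, IP
    data_classification: bool = False  # certified for the target data tier

def outstanding(review: ProcurementReview) -> list[str]:
    """Return the reviews still incomplete; empty means approvable."""
    return [f.name for f in fields(review) if not getattr(review, f.name)]

r = ProcurementReview(security_review=True, privacy_assessment=True)
print(outstanding(r))
# ['bias_audit', 'legal_review', 'data_classification']
```

Because every field defaults to False, a new tool starts blocked and each review must be explicitly recorded, which is the fail-closed behavior the policy calls for.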
Employee Notification & Consent
When must you tell employees AI is being used?
The Transparency Imperative
Employees have a right to know when AI is being used in decisions that affect them — and in many jurisdictions, it’s the law. NYC Local Law 144 requires employers to notify candidates when AI is used in hiring and provide bias audit results. California CRD regulations (Oct 2025) require meaningful human oversight and alternative assessments. Colorado AI Act (June 2026) mandates transparency and appeal processes. Illinois and Maryland have similar requirements. The EU AI Act classifies employment AI as “high risk” with extensive transparency requirements. Even where not legally required, transparency builds trust.
Candidate Notification
NYC Local Law 144 requires:
• 10 business days' notice before using an AEDT in hiring
• Published bias audit results (updated annually)
• Information about data retention and an alternative process

Even outside NYC, best practice is to disclose AI use in the hiring process. Candidates are increasingly asking — and their trust matters for your employer brand.
When to Notify vs. When to Get Consent
NOTIFY (inform, no opt-in required):
• AI-assisted resume screening
• AI-generated interview scheduling
• AI chatbot for benefits Q&A
• AI-powered learning recommendations
CONSENT (explicit opt-in required):
• AI video interview analysis
• AI-based employee monitoring
• AI sentiment analysis on communications
• AI health/wellness predictions
AVOID (high legal risk even with consent):
• AI-based emotion detection at work
• AI predictive termination scoring
• AI social media surveillance of employees
Default to transparency: When in doubt, disclose. The reputational cost of employees discovering undisclosed AI use is far greater than the cost of proactive communication. Make transparency your default, not your exception.
Policy Exceptions & Escalation
No policy covers everything — build a system for edge cases
Why Exceptions Matter
The fastest way to kill an AI policy is to make it rigid. A policy without an exception process becomes a policy people ignore. AI moves too fast for a static document to cover every scenario. A manager will need to use an unapproved tool for a time-sensitive project. A vendor will add an AI feature mid-contract. A new regulation will create requirements your policy doesn’t address. Build the exception process before you need it.
Exception Request Process
1. Request: Requester fills out a standard form describing the use case, tool, data involved, and timeline
2. Assessment: Security and legal do a rapid review (SLA: 5 business days)
3. Decision: Governance committee approves, denies, or grants temporary approval with conditions
4. Documentation: All exceptions logged with expiration dates and review triggers
5. Follow-up: Temporary exceptions auto-expire and require renewal
Escalation Path
Level 1: Manager + HR Business Partner. Scope: low-risk, within existing approved tools. SLA: 2 business days.
Level 2: AI Governance Committee. Scope: new tools, new use cases, medium risk. SLA: 5 business days.
Level 3: CHRO + General Counsel. Scope: high-risk, regulatory implications, employment decision tools. SLA: 10 business days.
Emergency: CISO + Legal, immediate. Scope: data breach, compliance violation, active legal threat. SLA: 24 hours.
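The escalation path is effectively a lookup table, which is how a request-intake form might route it. A minimal sketch; the risk labels and approver strings are illustrative:

```python
# Hypothetical router for exception requests: risk level -> (approver, SLA).
# Mirrors the escalation path above; SLA expressed in business days
# (the emergency tier's 24-hour SLA is recorded here as 1 day).
ESCALATION = {
    "low":       ("Manager + HR Business Partner", 2),
    "medium":    ("AI Governance Committee", 5),
    "high":      ("CHRO + General Counsel", 10),
    "emergency": ("CISO + Legal", 1),
}

def route(risk_level: str) -> tuple[str, int]:
    """Look up the approver and SLA for a request's risk level."""
    if risk_level not in ESCALATION:
        # Unknown risk is itself a policy gap; surface it, don't guess.
        raise ValueError(f"unknown risk level: {risk_level!r}")
    return ESCALATION[risk_level]

approver, sla_days = route("medium")
print(f"{approver}, {sla_days} business days")
# AI Governance Committee, 5 business days
```

Raising on an unknown level, rather than silently defaulting to Level 1, forces the gap back to a human, in the same spirit as the exception process itself.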
Keep it alive: Review your exceptions log quarterly. If you see the same exception requested multiple times, it’s time to update the policy. Exceptions are data about where your policy has gaps — use them.
Communicating the Policy
How to roll out an AI policy without creating fear
Framing Is Everything
The single biggest mistake organizations make is framing their AI policy as a ban. The message to send is the opposite: this is guardrails, not a ban. Guardrails keep you on the road; they don't stop you from driving. Your policy should feel like enablement with safety measures, not a list of things people can't do. Lead with what employees can do, then explain the boundaries. Acknowledge that AI is a powerful tool and that the policy exists to help people use it well.
The Rollout Plan
Week 1: Executive announcement — brief, positive, links to full policy
Week 2: Manager toolkit — talking points, FAQ, how to handle team questions
Week 3: Town halls by business unit — Q&A, live demos of approved tools
Week 4: Training sessions — hands-on practice with approved tools and guidelines
Ongoing: FAQ document (living), Slack/Teams channel for questions, monthly tips
Communication Channels
Executive Announcement
• Short email from CHRO or CEO
• Tone: "We're embracing AI responsibly"
• Link to policy, training schedule, FAQ
Manager Toolkit
• One-page overview of policy
• Talking points for team meetings
• Common questions and approved answers
• Escalation contacts
Employee FAQ
• "Can I use ChatGPT for work?" → Yes, with guidelines
• "Will AI replace my job?" → Augment, not replace
• "What if I already used AI?" → Amnesty period
• "How do I get a new tool approved?" → Request process
Ongoing
• Monthly AI tips newsletter
• Quarterly policy review updates
• Annual training refresher
The amnesty period: Consider a 30-day amnesty window where employees can disclose past AI use without consequences. You’ll learn far more about shadow AI usage than any audit could uncover, and you’ll build goodwill for the new policy.
Sources & Further Reading
• “Only 28% of organizations have a formal AI policy” — industry survey data, 2026
• NYC Local Law 144 — candidate notification requirements
• California CRD Regulations (Oct 2025) — transparency and alternative assessment requirements
• Colorado SB 24-205 (June 2026) — disclosure and appeal requirements
• GDPR Articles 13-14, 22 — automated decision-making transparency