Core Policy Components
Human review requirement: Define which decisions require human review before action. At minimum: all rejections of candidates who meet minimum qualifications, any decision affecting protected classes disproportionately, and all final hiring decisions.
Regular bias audits: Conduct adverse impact analyses quarterly, and annually at an absolute minimum. Compare selection rates across race, gender, age, disability status, and veteran status. Document everything.
Candidate notification: Inform candidates when AI is used in the screening process. Some jurisdictions require this by law (e.g., Illinois's Artificial Intelligence Video Interview Act, New York City's Local Law 144) — but doing it everywhere is best practice.
Appeal process: Give candidates a way to request human review of an AI-driven rejection. This isn’t just ethical — it’s increasingly a legal requirement.
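The quarterly adverse impact analysis described above can be sketched as a simple four-fifths-rule check — the EEOC's common rule of thumb that flags any group whose selection rate falls below 80% of the highest group's rate. This is a minimal illustration, not a compliance tool; the group names and counts are hypothetical:

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants who were selected."""
    return selected / applicants if applicants else 0.0

def adverse_impact(groups: dict) -> dict:
    """Apply the four-fifths rule across groups.

    groups: dict mapping group name -> (selected, applicants)
    Returns, per group: its selection rate, its ratio to the
    highest-rate group, and a flag when that ratio is below 0.8.
    """
    rates = {g: selection_rate(s, a) for g, (s, a) in groups.items()}
    top = max(rates.values())
    return {
        g: {
            "rate": r,
            "ratio": r / top if top else 0.0,
            "flagged": top > 0 and (r / top) < 0.8,
        }
        for g, r in rates.items()
    }

# Illustrative numbers only -- not real applicant data.
result = adverse_impact({
    "group_a": (48, 100),  # 48% selection rate (highest)
    "group_b": (30, 100),  # 30% rate -> ratio 0.625, flagged
})
```

A flagged group is a signal to investigate, not proof of discrimination on its own; real audits should also consider statistical significance and sample size, which is why the results go to Legal or Compliance.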
Policy Framework Template
RESPONSIBLE AI RECRUITING POLICY
1. SCOPE
All AI tools used in candidate sourcing, screening, assessment, or selection.
2. HUMAN OVERSIGHT
No candidate shall be rejected solely by automated means without human review.
3. TRANSPARENCY
Candidates will be notified when AI is used in evaluating their application.
4. BIAS MONITORING
Quarterly adverse impact analysis across all protected classes. Results reported to [Head of HR / Legal / Compliance].
5. VENDOR REQUIREMENTS
All AI recruiting vendors must provide bias audit results and model documentation.
6. APPEAL RIGHTS
Candidates may request human review of any AI-influenced screening decision.
7. DATA RETENTION
Candidate data used by AI systems will be retained per [policy] and deleted upon candidate request where required.
Next step: Adapt this framework to your organization, get legal review, and socialize it with your recruiting team. In the Under the Hood version of this chapter, we go deep on the technical details: how matching algorithms work, how to run an adverse impact analysis, and the full compliance landscape.