Ch 2 — AI in the HR Tech Stack

Where AI already lives in your ATS, HRIS, payroll, benefits, and learning platforms — and what “AI-powered” actually means
High Level
ATS → HRIS → Payroll → Benefits → Learning → Analytics
The AI Layer in Your Tech Stack
You’re probably already using AI — you just don’t know it yet
The Invisible Layer
Here’s a truth that surprises most HR ops leaders: you’re almost certainly already using AI. It’s embedded inside tools you use every day — your ATS parses resumes with it, your HRIS flags anomalies with it, your benefits platform recommends plans with it. The question isn’t “should we adopt AI?” — it’s “what AI are we already using, and are we governing it?” Most organizations have no inventory of the AI running inside their HR tech stack. That’s a governance gap.
Why This Matters
When AI is invisible, it’s ungoverned. Nobody is asking what data it uses, whether it’s biased, or what happens when it’s wrong. You can’t manage what you can’t see. This chapter maps the landscape so you can start seeing it.
Your HR Tech Stack — AI Map
Layer 1: Talent Acquisition. ATS (Greenhouse, Lever, iCIMS, Workday Recruiting) → AI: resume parsing, candidate matching, sourcing
Layer 2: Core HR / HRIS. Workday, BambooHR, UKG, Rippling, ADP → AI: org analytics, anomaly detection, workflows
Layer 3: Payroll & Compliance. ADP, Paylocity, Paychex, Deel → AI: error detection, tax compliance, auditing
Layer 4: Benefits Administration. Benefitfocus, Businessolver, Navia → AI: plan recommendations, cost prediction
Layer 5: Learning & Development. Cornerstone, Degreed, LinkedIn Learning → AI: personalized paths, skills gap analysis
Layer 6: People Analytics. Visier, One Model, Crunchr, built-in modules → AI: attrition prediction, engagement scoring
Ops action: Before reading further, mentally tick off which of these layers you own or influence. By the end of this chapter, you’ll know exactly what AI questions to ask for each one.
ATS: Where AI Is Most Aggressive
Your applicant tracking system has more AI than any other HR tool
The AI Features You’re Using
Applicant tracking systems are the most AI-saturated tools in HR tech. Almost every modern ATS includes: resume parsing (extracting structured data from unstructured documents), candidate matching (scoring applicants against job requirements), interview scheduling (coordinating calendars automatically), and sourcing automation (finding passive candidates). Some now include AI-generated job descriptions, chatbot screening, and video interview analysis. The adoption is aggressive because recruiting has clear, measurable outcomes — time-to-fill, quality-of-hire — that justify the investment.
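To make "candidate matching" concrete, here is a deliberately naive sketch of how a matcher might score a resume against a job's required skills. The skill sets are invented for illustration; real ATS matching uses far richer models, which is exactly why the transparency questions later in this chapter matter.

```python
# Toy candidate-matching score: the fraction of required skills
# that appear in a resume. Skill lists here are hypothetical.

def match_score(resume_skills: set[str], required_skills: set[str]) -> float:
    """Return the share of required skills found in the resume (0.0 to 1.0)."""
    if not required_skills:
        return 0.0
    found = resume_skills & required_skills
    return len(found) / len(required_skills)

job = {"python", "sql", "hris administration"}
candidate = {"python", "excel", "hris administration", "payroll"}

score = match_score(candidate, job)
print(f"Match score: {score:.2f}")  # 2 of 3 required skills -> 0.67
```

Even this toy version shows why explainability matters: you can point at exactly which skills drove the score. A vendor's "proprietary algorithm" should be able to offer at least that much.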
The Risk Landscape
Recruiting AI also carries the highest legal risk in your stack. NYC Local Law 144 already requires bias audits of automated employment decision tools. The EEOC has issued guidance. The EU AI Act classifies recruiting AI as “high risk.” If your ATS is screening candidates with AI, you need to know how.
Good vs. Bad Vendor Approaches
Red Flags
Vendor can’t explain how matching works. No bias audit available. “Proprietary algorithm” with no transparency. AI auto-rejects candidates without human review. No way to override or appeal AI decisions. Training data sources undisclosed.
Green Flags
Published bias audit methodology. Explainable scoring — you can see why a candidate ranked high or low. AI recommends but human decides. Override and appeal mechanisms built in. Clear documentation of training data. Regular model retraining.
Question to ask your vendor: “If a candidate asks why they were rejected, can we explain what role AI played in that decision?” If the answer is no, you have a compliance problem waiting to happen.
HRIS: The Data Backbone
Everything else in your stack depends on what’s in your HRIS
AI Inside Your HRIS
Your HRIS — whether it’s Workday, BambooHR, UKG, Rippling, or ADP — is increasingly embedding AI into core functions. Org analytics surface patterns in headcount, turnover, and span of control. Anomaly detection flags unusual changes — a sudden spike in terminations, an employee’s data changing unexpectedly. Workflow automation uses AI to route approvals, predict processing times, and prioritize tasks. Natural language search lets managers ask questions in plain English instead of building reports.
Why HRIS Data Quality Is Everything
Here’s the critical insight: your HRIS is the data source that feeds every other AI in your stack. Your ATS pulls job data from it. Your payroll system syncs comp data from it. Your analytics platform queries it. If your HRIS data is messy — inconsistent job titles, missing department codes, outdated org structures — then every AI downstream inherits that mess. AI doesn’t fix bad data. It amplifies it.
The Data Quality Cascade
Clean HRIS Data
├─ ATS gets accurate job specs
├─ Payroll gets correct comp data
├─ Benefits gets right eligibility
├─ Analytics gets reliable inputs
└─ AI models produce useful output

Messy HRIS Data
├─ ATS matches on wrong criteria
├─ Payroll flags false errors
├─ Benefits misclassifies employees
├─ Analytics shows misleading trends
└─ AI models amplify the garbage

// "Garbage in, garbage out" is not a cliché
// in AI — it's a mathematical certainty.
Ops priority: Before investing in any new AI tool, audit your HRIS data quality. Consistent job titles, complete department hierarchies, accurate employment dates, and clean manager relationships are the foundation. No AI vendor can save you from bad source data.
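The audit above can start very simply. This sketch scans hypothetical employee records for two of the problems named in this section, missing department codes and inconsistent job-title spellings; the field names and records are invented, and a real audit would cover far more fields.

```python
# Minimal HRIS data-quality check: missing departments and
# job titles that differ only by casing/whitespace.
# Records and field names are hypothetical.

employees = [
    {"id": 1, "title": "HR Generalist", "dept": "People Ops"},
    {"id": 2, "title": "hr generalist", "dept": "People Ops"},  # inconsistent spelling
    {"id": 3, "title": "Payroll Analyst", "dept": None},        # missing dept code
]

def audit(records: list[dict]) -> list[tuple]:
    issues = []
    for r in records:
        if not r.get("dept"):
            issues.append((r["id"], "missing department"))
    raw_titles = {r["title"] for r in records}
    normalized = {t.strip().lower() for t in raw_titles}
    if len(raw_titles) > len(normalized):
        issues.append((None, "inconsistent job-title spellings"))
    return issues

for emp_id, problem in audit(employees):
    print(emp_id, problem)
```

Running a check like this over an HRIS export is often the fastest way to see how much cleanup the "foundation" actually needs.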
Payroll & Benefits: Quiet AI
Less flashy, but AI is working behind the scenes in every pay cycle
AI in Payroll
Payroll AI doesn’t get headlines, but it’s doing critical work. Error detection catches anomalies before checks go out — a salary that doubled overnight, overtime that exceeds legal limits, a tax withholding that looks wrong. Compliance monitoring tracks changing tax laws across jurisdictions and flags when your configuration needs updating. Predictive processing estimates run times and identifies potential failures before they happen. When payroll goes wrong, employees notice immediately. AI’s job here is to be the safety net that catches mistakes before they reach paychecks.
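The "salary that doubled overnight" check can be sketched in a few lines: compare the current pay-cycle amount against an employee's recent history and flag sharp deviations for human review. The threshold and amounts here are invented; production systems use much more sophisticated detection.

```python
# Toy payroll anomaly flag: compare the current amount to the
# average of recent cycles. Ratio threshold is an assumption.

def flag_anomaly(history: list[float], current: float, ratio: float = 1.5) -> bool:
    """Flag if current pay deviates by more than `ratio` from recent average."""
    if not history:
        return False
    avg = sum(history) / len(history)
    return current > avg * ratio or current < avg / ratio

# Normal cycle: no flag. Doubled salary: flagged for review.
print(flag_anomaly([5000, 5000, 5050], 5100))   # False
print(flag_anomaly([5000, 5000, 5050], 10000))  # True
```

Note the design choice: the function flags, it does not correct. Keeping the human sign-off in the loop is what makes this a safety net rather than an unreviewed decision-maker.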
AI in Benefits
Plan recommendation engines analyze an employee’s age, family status, salary, and claims history to suggest the best benefits package. Cost modeling predicts total plan costs based on enrollment patterns. Chatbots handle routine benefits questions during open enrollment, freeing your team from answering the same “what’s my deductible?” question 500 times. Most employees interact with benefits AI without ever knowing it.
What Makes This Different
Payroll and benefits AI is defensive rather than offensive. In recruiting, AI is trying to find the best candidate (an optimization problem). In payroll, AI is trying to prevent errors (a detection problem). Different goals, different risks. The risk in recruiting AI is bias. The risk in payroll AI is false confidence — trusting the system caught everything when it didn’t.
The Hidden Data Flow
What Employees Don’t See
AI analyzed their claims history to predict costs. Their benefits recommendation was algorithmically generated. Payroll error detection flagged and auto-corrected a data entry before it became a problem. Their HSA contribution suggestion was AI-optimized.
Why That’s Usually Fine
These are lower-risk AI applications. They’re catching errors, not making employment decisions. The data stays internal. A human still signs off on payroll. But you still need to know it’s there — because employees will eventually ask.
Ops insight: Ask your payroll vendor: “What percentage of flagged anomalies are true positives vs. false alarms?” If they can’t answer, the AI might be creating alert fatigue for your team — too many false flags and people start ignoring all of them.
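The vendor question above has a standard metric behind it: precision, the share of flagged anomalies that were real problems. The counts below are made up purely to show the arithmetic.

```python
# Precision of an anomaly detector: true positives over all flags.
# Counts are hypothetical.

def flag_precision(true_positives: int, false_alarms: int) -> float:
    total = true_positives + false_alarms
    return true_positives / total if total else 0.0

# 12 real payroll errors caught vs. 88 false alarms in a quarter:
p = flag_precision(12, 88)
print(f"Precision: {p:.0%}")  # 12%: most flags are noise -> alert fatigue
```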
Learning & Development: AI as Coach
Personalized development paths powered by recommendation engines
How L&D Platforms Use AI
Modern learning platforms have borrowed heavily from consumer tech. Personalized learning paths adapt to an employee’s role, performance data, and career goals. Skills gap analysis compares an employee’s current competencies against their target role or industry benchmarks. Content recommendations surface relevant courses, articles, and mentors based on learning history and peer patterns. Adaptive assessments adjust difficulty in real time. The goal is to make corporate learning feel less like mandatory compliance training and more like a personalized development experience.
The Netflix Comparison
Vendors love to say their platform is “Netflix for learning.” Here’s what that actually means: Netflix’s recommendation engine analyzes what you watch, what you skip, how long you watch, and what people similar to you enjoy. L&D platforms do the same thing: what courses did you complete, what did you skip, how long did you spend, and what did people in your role and level find valuable? The difference: Netflix optimizes for engagement. Your L&D platform should optimize for capability development. Make sure those aren’t confused.
The Skills Intelligence Layer
Employee Profile
  Current role: Sr. HR Generalist
  Target role: HR Business Partner
  Tenure: 3.2 years

AI Skills Gap Analysis
  ✓ Strengths: employee relations, compliance
  ✗ Gaps: strategic planning, data analysis, executive communication

AI-Generated Learning Path
  1. Strategic HR Planning (course, 4 hrs)
  2. HR Analytics Fundamentals (course, 6 hrs)
  3. Executive Presence (workshop, live)
  4. Peer mentoring match: Sarah K., HRBP

// Based on paths of 47 employees who
// successfully made this transition
Quality check: Ask your L&D vendor: “Is the AI optimizing for course completion rates or for actual skill development?” If the platform pushes easy, short courses because they have high completion rates, it’s optimizing for the wrong metric. You want capability, not clicks.
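At its core, the skills-gap analysis in the panel above is a comparison between a current skill profile and a target role profile. This toy version uses simple set operations with invented skill sets; real platforms infer these profiles from assessments, work history, and peer data.

```python
# Toy skills-gap analysis: strengths are the overlap with the
# target role, gaps are what the target role needs that the
# employee lacks. Skill sets are hypothetical.

def skills_gap(current: set[str], target_role: set[str]) -> dict:
    return {
        "strengths": sorted(current & target_role),
        "gaps": sorted(target_role - current),
    }

generalist = {"employee relations", "compliance", "onboarding"}
hrbp_profile = {"employee relations", "compliance",
                "strategic planning", "data analysis"}

result = skills_gap(generalist, hrbp_profile)
print(result["gaps"])  # ['data analysis', 'strategic planning']
```

The hard part in practice is not this comparison but how the skill profiles are built, which is exactly where the vendor's methodology deserves scrutiny.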
People Analytics: The Promise and the Peril
Where helpful insight crosses the line into surveillance
The Promise
People analytics platforms promise to turn your workforce data into strategic insight. Attrition prediction models identify flight-risk employees before they resign, giving managers time to intervene. Engagement scoring aggregates survey data, collaboration patterns, and performance metrics into a single health indicator. Workforce planning uses AI to model scenarios — “what happens to capacity if we lose 15% of this team?” DEI analytics surface disparities in pay, promotion, and attrition across demographic groups. When done well, this is genuinely powerful for an operations leader who needs to see around corners.
The Peril
The same technology that predicts attrition can feel like surveillance. Monitoring email frequency, meeting attendance, Slack activity, and badge swipes to build a “productivity score” may be technically possible, but it erodes trust. The ethical line isn’t always a legal line — something can be legal and still be corrosive to your culture. As an ops leader, you’re the one who has to draw that boundary.
Where to Draw the Line
Insight (Helpful)
Aggregate trends: turnover by department, time-to-fill by role, engagement by tenure band. Patterns that inform strategy. Employees know what’s measured. Data is anonymized at the individual level.
Surveillance (Harmful)
Individual tracking: keystroke monitoring, screen recording, “active time” scoring. Predictions that become self-fulfilling (“this person will leave, so we stop investing in them”). No transparency about what’s measured.
The trust test: If your employees found out exactly what your analytics platform measures and how, would they feel informed or violated? If the answer is “violated,” you’ve crossed the line — regardless of what the vendor says is “industry standard.”
What “AI-Powered” Actually Means
A decoder ring for vendor marketing language
The Vendor Translation Guide
Vendors have every incentive to call everything “AI.” It commands higher prices, wins RFPs, and sounds innovative. But the gap between marketing language and technical reality can be enormous. “AI-powered search” might be keyword matching with fuzzy logic — a 20-year-old technology. “AI-driven insights” might be pre-built reports with conditional formatting. “Intelligent automation” might be basic if/then workflow rules. None of these are bad features — but calling them AI is misleading and makes it impossible to evaluate what you’re actually buying.
Questions That Cut Through
1. “What data does the AI train on?” — If they can’t answer, it might not be AI.
2. “Does the model improve over time with our data?” — Real AI learns. Rules don’t.
3. “Can you show me a prediction it got wrong?” — If they say it doesn’t make mistakes, it’s not AI (or they’re not being honest).
4. “What happens if we turn off the AI features?” — This reveals how central AI actually is vs. a marketing add-on.
Marketing vs. Reality
Vendor Says              Actually Means
────────────────────────────────────────
"AI-powered search"      Keyword matching, maybe synonyms
"AI-driven insights"     Pre-built dashboards with filters
"Intelligent matching"   Could be real ML, could be rules
"Predictive analytics"   Might be linear regression (basic)
"AI recommendations"     Often popularity-based sorting
"Machine learning"       Usually the real thing — ask to verify
"Neural network"         Real deep learning — ask about training
"NLP-powered"            Text analysis — ranges from basic to LLM
"Generative AI"          Likely an LLM — ask which one, who hosts
The litmus test: Ask the vendor: “If I gave you 10 identical inputs, would the system give 10 identical outputs?” Fixed rules always answer the same way. A genuine learning system's answers drift as the model retrains on new data, and generative models are typically non-deterministic even between back-to-back calls. Neither behavior is a flaw; both are properties you need to understand before you rely on the output.
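The contrast in the litmus test can be simulated in a few lines. A rule always returns the same answer for the same input, while a sampled generative model can vary call to call; here the "model" is just a seeded random choice standing in for LLM sampling, so everything is a toy illustration, not a real system.

```python
# Rules vs. sampled output: a fixed rule is deterministic;
# a sampling "model" (simulated with random.choice) is not.

import random

def rules_engine(years_experience: int) -> str:
    # Deterministic if/then rule: same input, same output, forever.
    return "senior" if years_experience >= 5 else "junior"

def sampled_model(prompt: str, rng: random.Random) -> str:
    # Stand-in for non-deterministic generative sampling.
    return rng.choice(["senior", "mid-level", "experienced"])

# Rules: 10 identical inputs collapse to one distinct output.
print({rules_engine(7) for _ in range(10)})  # {'senior'}

# Sampled model: identical inputs can yield different outputs.
rng = random.Random(0)
print({sampled_model("classify: 7 yrs experience", rng) for _ in range(10)})
```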
Auditing Your Current Stack
A practical worksheet for mapping the AI you already have
Your AI Inventory
The single most valuable thing you can do after reading this chapter is build an inventory of AI in your current stack. For each tool you use, find out: Does it use AI? What kind? What data does it access? Who approved it? Is it documented? Most HR ops teams have never done this audit, and the results are always eye-opening. You’ll likely discover AI features you didn’t know were active, data flows you didn’t authorize, and vendor capabilities you’re paying for but not using.
Questions for Your Account Managers
1. Which features in our contract use AI or ML?
2. What employee data does the AI access?
3. Is our data used to train your models for other clients?
4. Do you have a published bias audit or impact assessment?
5. What happens to AI-generated outputs — are they logged?
6. Can we opt out of specific AI features?
7. What’s your AI incident response process?
AI Stack Audit Checklist
FOR EACH HR TOOL IN YOUR STACK:
[ ] Tool name & vendor
[ ] Which AI features are enabled?
[ ] What employee data does AI access?
[ ] Is data sent to external servers / cloud?
[ ] Is your data used to train vendor models?
[ ] Does AI make decisions or only recommend?
[ ] Can employees see AI-generated outputs?
[ ] Is there a human review step?
[ ] Bias audit available? (Y/N, date)
[ ] Documented in your data processing agreement?
[ ] Last vendor review date: ___________

RISK RATING:
Low  = AI assists, human decides, no PII
Med  = AI recommends, accesses PII
High = AI decides, sensitive data, no audit
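If you keep the audit in a spreadsheet, the risk rating at the bottom of the checklist can be applied mechanically. This sketch encodes one plausible reading of that rubric; the field names are invented to match the checklist, and your legal and IT partners may weigh the factors differently.

```python
# One possible encoding of the checklist's risk rubric.
# Field names and the tool inventory are hypothetical.

def risk_rating(ai_decides: bool, accesses_pii: bool, bias_audit: bool) -> str:
    if ai_decides and accesses_pii and not bias_audit:
        return "High"   # AI decides, sensitive data, no audit
    if accesses_pii:
        return "Med"    # AI recommends, accesses PII
    return "Low"        # AI assists, human decides, no PII

tools = [
    {"name": "ATS",     "ai_decides": True,  "accesses_pii": True,  "bias_audit": False},
    {"name": "Payroll", "ai_decides": False, "accesses_pii": True,  "bias_audit": True},
    {"name": "L&D",     "ai_decides": False, "accesses_pii": False, "bias_audit": False},
]

for t in tools:
    print(t["name"], risk_rating(t["ai_decides"], t["accesses_pii"], t["bias_audit"]))
```

The output of a pass like this (ATS: High, Payroll: Med, L&D: Low) gives you a defensible starting order for vendor conversations.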
Next step: Print this checklist (or copy it into a spreadsheet) and fill it out for every HR tool you own. Share it with your IT and legal partners. This document becomes the foundation for your AI governance strategy — and it positions you as the person who saw this coming.