
What it is: A clear-eyed look at how AI hiring tools are creating real legal exposure for companies, especially those hiring across borders.
Why it matters: Most founders using AI to screen candidates have no idea how much legal risk is sitting quietly in that decision. The lawsuits are already here. The regulations are tightening. And the companies getting hit hardest are the ones who assumed the vendor handled it.
What to know: AI screening tools trained on biased historical data repeat that bias at industrial scale, courts are already certifying collective actions against employers and vendors alike, and state rules from New York City to California to Colorado now demand bias audits, record-keeping, and human oversight.
Red flags: a vendor that can't explain its scoring, no independent bias audit, candidate data retained indefinitely, and no trained human with authority to override the AI's decisions.
Bottom line: AI in hiring isn't inherently bad. But deploying it without understanding the legal and ethical risks is one of the fastest ways to turn a hiring shortcut into a company-ending lawsuit.
The shift to AI-powered hiring happened fast and for good reason. In 2024 alone, AI hiring tools processed over 30 million applications while triggering hundreds of discrimination complaints. These tools screen resumes in seconds, rank candidates automatically, and eliminate hours of manual review. For founders hiring at scale, the efficiency case is obvious.
The problem is what happens underneath. AI tools learn patterns from historical hiring data. If past hiring decisions were biased, and statistically most were, the AI learns and repeats those patterns at industrial scale. What a biased hiring manager might have done to 50 candidates, an AI does to 50,000. The bias isn't random. It's systematic, and it compounds.
Research from the University of Washington found that AI resume screening tools favored white-associated names in 85.1% of cases, and Black male candidates were disadvantaged in up to 100% of direct comparisons with white male candidates. That's not a minor variance. That's a structural problem sitting inside tools that most Fortune 500 companies now use.
Here's the mechanism, in plain terms. An AI tool is trained on data, in this case, records of past hiring decisions. It learns what a "good" candidate looks like based on who was hired before. If your company historically hired mostly men for engineering roles, the algorithm learns that men make better engineers. It's not a bug. It's the system functioning exactly as designed, on flawed inputs.
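To see that mechanism concretely, here is a minimal sketch in Python, using synthetic data and hypothetical feature names rather than any vendor's actual model: a classifier trained on skewed historical decisions learns to reproduce the skew.

```python
# Minimal sketch, synthetic data only: a model trained on biased
# historical hiring labels reproduces the bias in its own scores.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)   # 0 = group A, 1 = group B (hypothetical)
skill = rng.normal(0, 1, n)     # identical skill distribution in both groups

# Historical labels: past managers hired group A more often at the
# same skill level. These flawed decisions are the training input.
hired = (skill + 1.0 * (group == 0) + rng.normal(0, 0.5, n)) > 0.8

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Equally skilled candidates now get different selection rates by group.
for g in (0, 1):
    rate = model.predict(np.column_stack([skill, np.full(n, g)])).mean()
    print(f"group {g}: predicted selection rate {rate:.1%}")
```

The model never "decides" to discriminate. It simply fits the pattern it was given, which is exactly the failure mode described above.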
The bias also shows up through what lawyers call proxy discrimination, where the AI uses seemingly neutral information that turns out to correlate with protected characteristics. Zip codes correlate with race. University names correlate with class. Employment gaps are more common among women who took time to raise children. Models that use inputs like education, zip codes, and language patterns can function as proxies for protected traits, producing discriminatory outcomes even when the tool appears neutral on the surface.
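A companion sketch, again with purely synthetic data, shows why simply deleting the protected attribute doesn't fix this: a correlated stand-in like a zip code lets the model reconstruct it.

```python
# Standalone synthetic sketch of proxy discrimination: the protected
# trait is never a feature, but a correlated zip code stands in for it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)                  # protected trait, never a feature
skill = rng.normal(0, 1, n)                    # identical across groups
zip_code = np.where(group == 0,
                    rng.integers(0, 50, n),    # hypothetical segregated
                    rng.integers(50, 100, n))  # zip-code ranges

# Same biased historical labels as before
hired = (skill + 1.0 * (group == 0) + rng.normal(0, 0.5, n)) > 0.8

# Train on skill and zip code only; the model infers group from the proxy.
model = LogisticRegression(max_iter=1000).fit(
    np.column_stack([skill, zip_code]), hired)
pred = model.predict(np.column_stack([skill, zip_code]))

for g in (0, 1):
    print(f"group {g}: selection rate {pred[group == g].mean():.1%} "
          "(protected trait never in the features)")
```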
Video interview tools take this further. HireVue's AI-based video interview platform faced criticism for its facial and speech analysis tools, which disproportionately disadvantaged non-native English speakers and neurodiverse candidates, rating applicants lower based on accents, facial expressions, and even background noise. The algorithm scores your face and your voice. Most candidates don't know this is happening.
The opacity problem makes all of this worse. Most AI hiring vendors treat their scoring logic as proprietary. A candidate gets a score of 47 out of 100. What does that mean? Which signals drove the rejection? Neither the candidate nor your HR team can find out. That's not just unfair. It's a legal liability, because regulators are now asking the same question.

The courts have stopped treating AI bias as theoretical. The cases are real, the liability is real, and the trend is accelerating.
The landmark case right now is Mobley v. Workday. In May 2025, a federal court granted preliminary certification, allowing the case to proceed as a nationwide collective action potentially covering millions of job seekers over 40; the plaintiffs claim Workday's AI screening tool unlawfully discriminated based on race, age, and disability. Courts are increasingly willing to consider whether AI vendors themselves can be held liable. That precedent changes everything for anyone buying AI hiring software.
There are more. In September 2023, iTutorGroup paid $365,000 to settle the EEOC's first AI screening discrimination lawsuit, after its software automatically rejected female applicants over 55 and male applicants over 60, turning away over 200 qualified candidates based solely on age. In 2025, EEOC charges were reportedly filed against Intuit and HireVue after a deaf Indigenous applicant was denied captioning during an automated video interview and subsequently rejected, with the AI feedback telling her to "practice active listening."
The number of AI hiring discrimination lawsuits is rising due to converging legal pressures: widespread adoption of unvalidated tools, measurable statistical harm to protected groups, and courts increasingly willing to hold both employers and vendors liable for discriminatory outcomes. If you are using an AI hiring tool right now, you are operating inside that liability environment.
There is no single law governing AI in hiring in the United States. Instead, you get a patchwork of state rules, each with different requirements, different definitions, and different enforcement timelines.
New York City's Local Law 144 requires independent annual bias audits for any automated hiring tool, with public reporting of results, and it is already in effect. California's Civil Rights Council regulations, effective October 2025, require employers to test proactively for bias, keep detailed records for four years, provide alternative assessments for candidates who could be disadvantaged, and in some cases extend obligations to vendors, particularly where they design or control employment-related AI systems. Colorado's AI Act, effective June 2026, requires rigorous impact assessments for "high-risk" systems, a category that explicitly includes hiring tools. Illinois and Texas have their own frameworks, both live as of January 2026.
The practical consequence: a tool that's fully compliant in one state could still create legal exposure in another. If you're recruiting nationwide in the US, you're potentially navigating a growing patchwork of requirements across multiple jurisdictions. No unified federal law exists yet to simplify this.
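For a sense of what the required bias audits actually measure: NYC's Local Law 144 audits report selection rates and impact ratios per demographic group. Here is a minimal sketch of that arithmetic with made-up counts; the 0.8 threshold echoes the EEOC's informal four-fifths rule of thumb, not a hard legal line.

```python
# Bias-audit arithmetic with made-up counts: selection rate per group,
# and each group's impact ratio against the highest-rate group.
applicants = {"group_a": 1_000, "group_b": 1_000}  # hypothetical counts
selected   = {"group_a": 180,   "group_b": 95}     # hypothetical counts

rates = {g: selected[g] / applicants[g] for g in applicants}
best = max(rates.values())

for g, rate in rates.items():
    ratio = rate / best
    flag = "  <-- below four-fifths, review" if ratio < 0.8 else ""
    print(f"{g}: selection rate {rate:.1%}, impact ratio {ratio:.2f}{flag}")
```

If your vendor can't hand you these numbers for your own candidate pool, that gap is itself a warning sign under the audit regimes above.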

If you are hiring globally, the compliance picture doesn't just get more complex; it multiplies. Every country where you screen a candidate triggers a different legal framework. Not different versions of the same rules. Different rules entirely.
Under GDPR's Article 22, job applicants already have the right not to be subject to fully automated hiring decisions that significantly affect them, unless specific legal safeguards are in place. That means a candidate in Germany, France, or anywhere else in the EU can demand a human review of an AI rejection. They can ask what logic drove the decision. Most AI hiring tools can't answer: the models are proprietary and often process data on external servers, and when the decision happens inside a black box, you can't explain it to candidates. That puts you in direct conflict with GDPR's transparency requirements.
The data question compounds this further. EU regulators have issued 193 GDPR fines specifically in the employment sector, totaling €360.9 million, and the ICO's 2024 audit found AI recruiting tools retaining candidate data indefinitely without candidates' knowledge, which regulators flagged as a clear violation. Candidate data has a shelf life under GDPR. Most AI tools weren't built with that constraint in mind.
The US adds its own layer. Over 400 AI-related bills were introduced across 41 US states in 2024 alone. If you are hiring across multiple states and into the EU simultaneously, you are navigating frameworks that don't align, don't reference each other, and in some cases directly conflict. Your AI vendor almost certainly built their tool for one market. The compliance gap between markets is your problem to close, not theirs.
You don't need to stop using AI in hiring. You need to use it with your eyes open.
Start by asking your AI vendor three direct questions: What data was your tool trained on? What bias testing has been done, and can I see the results? What happens when a candidate requests an explanation for an automated rejection? If the answers are vague, assume the risk is yours, not the vendor’s.
California's regulations make clear that vendors can be held liable, but employers remain responsible for ensuring AI tools used in hiring do not produce unlawful discriminatory outcomes and must maintain a trained human with authority to override AI decisions. That human override is not a formality. It's a legal requirement in an increasing number of jurisdictions, and it's the right call regardless.
If you hire across borders, get compliance advice specific to each market before deploying AI screening tools. The patchwork of rules is genuinely complicated. Talk to us about compliant cross-border hiring.
AI has made hiring faster. It's also made it riskier in ways that weren't visible until the lawsuits started arriving. The companies that come out of this period ahead aren't the ones who abandoned AI tools; they're the ones who understood what those tools were actually doing and built the oversight structure to stay on the right side of the law.
The regulations are tightening. The lawsuits are accelerating. And if you hire across borders, the compliance complexity multiplies with every market you enter.
Don't wait for a complaint to find out where your exposure is. Hit us up and let's look at your hiring setup before it becomes a legal problem. Building compliant global teams? Get a free consultation on what your current processes actually need.
Am I liable if a third-party AI tool discriminates?
Yes. Employers remain responsible for discriminatory outcomes, even with third-party tools.
What is a bias audit, and do I need one?
A bias audit checks if outcomes differ across demographic groups. It's required in places like NYC and is the clearest way to catch legal risk early.
Does the EU AI Act apply to my company?
Yes, if you screen EU-based candidates. The EU AI Act applies based on candidate location, with key deadlines in 2026.
Can candidates demand an explanation for an automated rejection?
In the EU, yes: under GDPR, they can request explanations and human review. In the US, this depends on state laws but is expanding.
Does any of this apply to small companies?
Yes. Using any automated screening tool brings you into scope. Size matters less than whether your hiring process creates risk.
Manage top talent and scale with confidence; our EOR service has you covered.