
AI is becoming a regular part of the hiring process, whether applicants realize it or not. Employers now use software to screen resumes, conduct video interviews, and rank candidates before a human ever gets involved. On the surface, it sounds efficient. But the lack of regulation and transparency raises serious concerns, especially when it comes to fairness and discrimination.
Biased Data In, Biased Results Out
One major issue with AI hiring tools is how they’re trained. Many of these systems pull data from the internet or from past hiring decisions. If that data reflects bias, so will the AI. For example, if the system learns that most previous hires were white men with certain hobbies or schools in common, it may downgrade candidates who don’t fit that mold. That’s not just unfair; it can be discriminatory.
And it’s not just about race or gender. Age, disability, national origin, and other protected traits could factor into how someone’s resume or online presence is scored. You might never even know it happened. California law prohibits discrimination in hiring based on protected categories under the Fair Employment and Housing Act (FEHA), but if an algorithm does the rejecting, proving that bias becomes harder.
No Standards, No Oversight
Unlike background checks or interview questions, AI screening tools aren’t subject to clear rules. There are few guidelines on what data these systems can use, how they must treat candidates, or how much human review is required. That means decisions can be made based on flawed or even irrelevant factors.
Something as minor as your music preferences or your favorite sports team could play a role. Even when unintended, a preference for specific traits can produce outcomes that exclude qualified people based on race, age, or cultural background. That can create a discriminatory effect even where there was no explicit intent to discriminate.
States like California and New York have started to take notice. Proposed legislation aims to improve transparency and ensure hiring algorithms don’t violate civil rights laws. For now, though, the field remains largely unregulated. Without meaningful oversight, workers are left in the dark about how these systems work and how they’re being evaluated.
There’s also the question of who is reviewing the results. AI doesn’t run in a vacuum. There’s still a human somewhere making final decisions, or at least reviewing a shortlist created by the machine. Those humans bring their own biases. Maybe they tend to favor candidates who went to the same college or grew up in the same city as them. That kind of subtle preference can reinforce inequality, especially when it shows up across thousands of hiring decisions.
What Job Seekers Should Keep in Mind
One of the most significant problems is how easy it is to be filtered out by AI without ever knowing why. Online platforms like Indeed make it simple to apply for hundreds of jobs, but also simple for companies to reject large groups of applicants with a single setting. That could include filters based on age, education gaps, or even frequency of doctor visits, whether fair or not.
In many cases, the parameters being used are completely invisible. A hiring platform could automatically discard resumes from candidates over a certain age or downgrade applicants based on health data or gaps in employment. That means large numbers of qualified workers could be excluded from consideration and never find out why.
Your digital footprint also matters. AI tools may scan your social media presence or analyze your speech in a video interview. While that might seem harmless, it opens the door to decisions based on personal style, speech patterns, or appearance, none of which should determine someone’s qualifications.
And because so much of this happens behind closed doors, you may not even realize you’ve been affected. There is currently no requirement for employers to tell applicants when AI tools were used or what data was factored into their decision. That lack of transparency leaves little room to challenge discrimination.
Meanwhile, lawsuits have begun to emerge in other areas, particularly the arts, where AI is repurposing people’s likenesses and creative work without consent. It’s only a matter of time before similar legal questions show up in hiring disputes. As with other legal developments in tech, the law hasn’t caught up to the problem.
Hiring should be about your qualifications, not how well you match an algorithm’s idea of the “ideal” candidate. If you think bias, age discrimination, or unfair filtering played a role in your job search, Sansanowicz Law Group, P.C. can help you evaluate what happened and what steps to take. Contact us to learn more.
Leonard Sansanowicz