Artificial intelligence is changing how companies hire, promote, discipline, and even terminate employees. From resume-screening software to automated performance scoring systems, AI tools are now deeply embedded in workplace decision-making.
But what happens when AI makes biased decisions?
In Texas, employment discrimination laws apply whether decisions are made by a human manager—or by an algorithm. If AI systems disproportionately impact certain groups, employers may still be legally responsible.
Here’s what employees and employers should understand about discrimination involving AI.
Many Texas employers use AI-driven tools to:
Screen resumes
Rank job applicants
Analyze video interviews
Evaluate employee productivity
Predict “culture fit”
Flag policy violations
Recommend terminations
These systems are often marketed as objective and data-driven. However, AI is only as neutral as the data it is trained on.
AI systems may unintentionally replicate or amplify existing biases.
For example:
If an algorithm is trained on historical hiring data that favored one demographic group, it may continue favoring that group.
Resume filters may exclude applicants from certain schools or zip codes that correlate with race or socioeconomic status.
Facial analysis software may perform less accurately for certain racial groups.
Automated scheduling systems may disadvantage employees with caregiving responsibilities.
Even if no one intended discrimination, the outcome can still be unlawful.
In Texas, employment discrimination is prohibited under federal laws such as Title VII of the Civil Rights Act, the Age Discrimination in Employment Act, and the Americans with Disabilities Act, which protect against bias based on:
Race
Color
National origin
Sex
Pregnancy
Religion
Age
Disability
These protections apply regardless of whether the decision was made by:
A human supervisor
A hiring committee
A third-party vendor
An AI algorithm
Employers cannot avoid liability by blaming the software.
AI-related discrimination often falls under the legal theory of disparate impact.
Disparate impact occurs when:
A neutral policy or tool
Disproportionately harms a protected group
And is not justified by business necessity
For example, if an AI hiring tool systematically filters out older applicants at a higher rate, that may raise age discrimination concerns—even if the tool never explicitly considers age.
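To make that idea concrete, here is a minimal sketch, written in Python with entirely hypothetical applicant numbers, of the kind of selection-rate comparison that statisticians and regulators often start with. It applies the EEOC's "four-fifths rule," a common screening benchmark under which a group's selection rate falling below 80% of the highest group's rate can suggest adverse impact. This is an illustration, not a legal test by itself.

```python
# Minimal sketch: comparing selection rates across age groups using
# hypothetical counts. Under the EEOC's four-fifths rule, a selection rate
# below 80% of the highest group's rate is a common indicator of possible
# adverse impact.

# Hypothetical numbers for illustration only.
applicants = {"under_40": 200, "40_and_over": 150}
advanced_by_ai = {"under_40": 80, "40_and_over": 30}

rates = {group: advanced_by_ai[group] / applicants[group] for group in applicants}
highest_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest_rate
    flag = "possible adverse impact" if impact_ratio < 0.8 else "within the 4/5 benchmark"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {impact_ratio:.2f} -> {flag}")
```

A ratio below the benchmark is only a starting point; courts and regulators also consider sample sizes, statistical significance, and whether the practice is justified by business necessity.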
Some Texas employers use AI to monitor:
Keystrokes
Productivity metrics
Time spent on tasks
Customer interactions
If automated monitoring results in harsher discipline for certain demographic groups, it may create discrimination risks.
Bias can enter through:
Incomplete data
Flawed assumptions
Improper weighting of performance factors
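As a simple illustration of the weighting problem, the sketch below, using a hypothetical formula and numbers not drawn from any real product, shows how a productivity score that leans heavily on after-hours activity can rank two employees with identical output very differently, even though the formula never references a protected characteristic.

```python
# Minimal sketch (hypothetical weights and data): how a weighting choice in an
# automated productivity score can disadvantage one group of employees.

def productivity_score(tasks_completed, after_hours_activity):
    # Hypothetical weighting: after-hours activity counts as much as completed
    # work, even though it says little about actual output.
    return 0.5 * tasks_completed + 0.5 * after_hours_activity

# Two hypothetical employees with identical output during scheduled hours.
score_a = productivity_score(tasks_completed=90, after_hours_activity=40)
score_b = productivity_score(tasks_completed=90, after_hours_activity=5)  # e.g., caregiving duties limit after-hours work

print(score_a, score_b)  # 65.0 vs. 47.5: same core output, very different scores
```

If employees in a protected group are systematically less available after hours, a weighting choice like this can produce exactly the kind of disparate discipline pattern described above.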
Generally, the employer is responsible for employment decisions—even if those decisions rely on third-party software.
Employers have a duty to:
Evaluate the tools they use
Monitor outcomes
Address bias
Ensure compliance with anti-discrimination laws
Delegating decisions to software does not eliminate legal obligations.
Employees and applicants may notice patterns such as:
Repeated automated rejections without interviews
Lack of diversity in promotions
Disproportionate discipline tied to performance software
Sudden termination decisions tied to metrics
Patterns matter more than isolated incidents.
AI in employment is an evolving legal issue. While Texas does not yet have AI-specific employment statutes, existing anti-discrimination laws already cover algorithmic decision-making.
Regulators and courts increasingly recognize that technology-driven discrimination is still discrimination.
Employers who adopt AI tools without oversight may expose themselves to legal risk.
If you believe AI tools may have contributed to discrimination:
Document the decision-making process
Compare treatment across groups
Save rejection notices or communications
Review job qualifications versus hiring outcomes
Note sudden changes tied to automated systems
Because AI systems often lack transparency, legal evaluation may require deeper analysis.
AI discrimination cases are often more complicated than traditional cases because:
Algorithms may be proprietary
Data may be confidential
Bias may be statistical rather than obvious
Intent may be difficult to prove
These cases often focus on outcomes and patterns rather than explicit statements.
Artificial intelligence may streamline hiring and workplace decisions, but it does not eliminate legal responsibilities. In Texas, employers remain accountable for discriminatory outcomes—even when decisions are automated.
AI is a tool. When that tool produces biased results, the law still applies.
As AI becomes more common in employment decisions, understanding how discrimination laws intersect with technology is more important than ever.