Artificial Intelligence (AI) in Employment Law

The use of artificial intelligence in employment decisions is one of the fastest-changing and most complex areas of employment law.

AI has already changed the way many employers make employment decisions. As AI-driven decision making develops, regulators and lawmakers are seeking to understand the technology and protect workers. The technology, however, is advancing rapidly, and lawmakers are struggling to keep up.

Potomac Legal Group is closely following developments in the regulation and enforcement of AI use in the workplace. We represent employees who have faced unlawful actions from their employers, whether those employers rely on traditional human decision making or on AI services to make employment decisions.

Employers Responsible for Decisions

Fortunately, employees do not have to wait for laws to change or adapt. Regardless of who or what makes an employment decision, the employer is ultimately responsible for its own actions. Current law already addresses unlawful employment actions.

If you have experienced employment discrimination, unlawful termination, or retaliation due to an employment decision made by AI or a human, contact our attorneys to request a review of your matter to determine the options available to you.

Contact Us Today to Discuss Your Matter

What is AI in Employment?

AI in employment refers to the use of artificial intelligence and machine learning in employment-related decision making and in the hiring, promotion, tracking and management of employees. 

Employers may use AI to automate certain aspects of the hiring process, such as resume screening and initial candidate interviews. The use of AI in employment, however, must comply with laws and regulations that prohibit discrimination and harassment while protecting personal data privacy, health information and security.

Unlawful Use of AI in Employment

The use of AI in employment decision-making may result in biased outcomes if the algorithm and its training data are not properly designed, selected, and tested for bias. Employers must also ensure that their use of AI does not result in a disparate impact on protected groups.

There are multiple federal laws that employers must comply with when using AI in the hiring process. For example, Title VII of the Civil Rights Act of 1964, the Americans with Disabilities Act (ADA), and the Age Discrimination in Employment Act (ADEA) prohibit employment discrimination on the basis of race, color, religion, sex, national origin, disability, and age.

Additionally, employers may have compliance requirements under federal laws related to data privacy and security, as well as under an emerging patchwork of state and local laws, including those of Washington, D.C., Maryland, and Virginia.

AI Bias in Employment

An employer is not permitted to use AI that discriminates against individuals based on protected characteristics, such as race, gender, religion, age, and disability. Employers are also prohibited from using AI to make employment decisions that have a disparate impact on protected groups. Employers must comply with laws related to data privacy and security when using AI in the workplace.

Employers using AI should take steps to mitigate the risk of negative or unintended outcomes, such as by testing the algorithm for bias and ensuring that it is fair and non-discriminatory.
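
As a rough illustration, one common screening step is to compare selection rates across demographic groups against the EEOC's four-fifths guideline, under which a group's selection rate below 80 percent of the highest group's rate may indicate disparate impact. The sketch below (in Python, using entirely hypothetical numbers) shows the basic arithmetic only; it is not a legal test, and an actual disparate impact analysis depends on statistical significance and the specific facts.

    # Minimal sketch: selection rates and adverse impact ratios for a
    # hypothetical AI screening tool. All numbers are hypothetical.
    outcomes = {
        "group_a": {"applicants": 200, "selected": 120},
        "group_b": {"applicants": 150, "selected": 54},
    }

    # Selection rate = selected / applicants for each group.
    rates = {g: v["selected"] / v["applicants"] for g, v in outcomes.items()}
    highest_rate = max(rates.values())

    for group, rate in rates.items():
        impact_ratio = rate / highest_rate
        flag = ("review for disparate impact" if impact_ratio < 0.8
                else "within four-fifths guideline")
        print(f"{group}: selection rate {rate:.0%}, "
              f"impact ratio {impact_ratio:.2f} -> {flag}")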

AI Employment Discrimination

Discrimination against employees and applicants by artificial intelligence and other algorithms is unlawful. An AI program need not be openly discriminatory toward individuals in a protected class to violate federal law.

An AI program may be discriminatory when it selects against characteristics or attributes that are statistically correlated with a protected class. These may include your hairstyle, your clothing, the college you attended, the types of activities you enjoy, or your performance on an AI-administered test.
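
As a simplified illustration (with entirely hypothetical data), the Python sketch below shows how a facially neutral screening rule, in this case advancing only applicants from a particular college, can produce very different selection rates across groups when that criterion happens to correlate with a protected class.

    # Simplified sketch of "proxy" discrimination, using hypothetical data.
    # Each record: (protected_class, attended_target_college).
    applicants = [
        ("class_x", True), ("class_x", True), ("class_x", True), ("class_x", False),
        ("class_y", True), ("class_y", False), ("class_y", False), ("class_y", False),
    ]

    # Facially neutral rule: advance only applicants from the target college.
    def screen(attended_target_college: bool) -> bool:
        return attended_target_college

    # Compare how often each group is advanced by the "neutral" rule.
    for group in ("class_x", "class_y"):
        members = [a for a in applicants if a[0] == group]
        advanced = [a for a in members if screen(a[1])]
        print(f"{group}: {len(advanced)}/{len(members)} advanced "
              f"({len(advanced) / len(members):.0%})")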

Determining AI discrimination in employment matters is complex, and the specific facts of your employment are relevant in determining whether an employer has violated your rights.

AI in Hiring & Resume Screening

AI can be used in the hiring process to assist with tasks such as resume screening, candidate matching, and interview scheduling. By automating these tasks, AI can increase efficiency and, if the system is properly trained and validated, may even help reduce bias in the hiring process.

If the model is trained on data that has been collected from a biased or discriminatory system, it can perpetuate those biases in its predictions. This can lead to unfair decisions in hiring, promotion, or other employment-related actions. 

Recruitment

Employers may not recruit and hire based on the output of a discriminatory AI. Problems may arise when the data used to train a system overrepresents certain groups or characteristics. For example, an employer may have to shut down an AI recruiting tool that shows bias against women because the resumes it was trained to judge against came predominantly from men. Job search and employee selection systems that repeatedly reject a candidate may be discriminatory and warrant further investigation.

AI in Employee Monitoring & Tracking

AI can be used to track employees in various ways, such as monitoring their work hours, location, computer usage, and productivity. Many systems offer keylogging and screen-recording features with triggers that alert an employer when an employee's productivity falls below normal or when the employee accesses unapproved websites, social media accounts, or personal documents on a work device.

The use of AI in tracking employees, however, raises several legal and ethical concerns. 

One of the main legal concerns is employee privacy. Employers may need to comply with state laws related to data privacy and security when collecting, storing, and using employee data. These laws may require employers to obtain employee consent for the collection, use, and storage of personal data, and to implement appropriate security measures to protect that data.

The use of AI in employee tracking may also raise concerns about discrimination, especially with respect to employees with physical or mobility limitations. Employers must ensure that their use of AI in tracking employees does not result in a disparate impact on protected groups.

Another concern with AI in tracking employees is that it may create a negative work environment, where employees feel under constant surveillance and pressure to perform at a high level. This can lead to stress, burnout, and a decrease in employee morale and productivity.

Overall, employers must weigh the potential benefits of using AI in tracking employees against the legal and ethical concerns. They should consider seeking legal and ethical advice, and also communicate and consult with employees, before implementing AI tracking systems.

State Law: Employee Monitoring

Currently, neither Washington, D.C., Maryland, nor Virginia has a law specifically addressing the use of employee monitoring and tracking software. Maryland and Washington, D.C., however, are addressing technologies that may be incorporated into monitoring systems.

In Maryland, an employer may not use facial recognition technology for the purpose of creating a template of an applicant’s face without the applicant’s permission.

Washington, D.C., does not have specific privacy laws for employees. The city council, however, has proposed the D.C. Stop Discrimination by Algorithms Act of 2023, which likely would govern algorithms used in monitoring systems.

Virginia has no specific privacy laws for employees, and Virginia’s consumer data privacy law specifically exempts data collected in the employment context.

Criminal Background Checks

In Maryland and Washington, D.C., employers are often prohibited from requiring applicants to disclose whether they have a criminal record or have had criminal allegations brought against them. These restrictions also extend to AI technologies: vendors providing AI services may not use this criminal background data to train their algorithms.

Federal Regulation of AI in the Workplace

Federal agencies and Congress are exploring possible regulations for AI in employment. 

The U.S. Equal Employment Opportunity Commission (EEOC) is holding hearings to examine the impact of AI and automated systems on employment decisions. The EEOC is concerned about tools including resume screeners, video-interviewing software that evaluates facial expressions and speech, and software that assesses “job fit” based on personality, aptitude, or skills.

The EEOC recognizes that these AI systems can have a negative impact on protected groups and “particularly vulnerable workers,” including immigrants, individuals with disabilities, those with criminal records, LGBTQI+ individuals, older workers, and those with limited literacy or English proficiency.

Proposed Legislation in Washington, D.C., Maryland & Virginia

D.C. Stop Discrimination by Algorithms Act of 2023

Under this proposed act, employers would be required to conduct an annual discrimination audit, performed by a third party and reported to the city. Employers would also be required to post a notice informing employees about the law and to display a pop-up notice on certain systems.

The council included a broad definition of the protected data an algorithm might use. The bill addresses the use of IP addresses, equipment identification numbers or MAC addresses, consumer purchase histories, geolocation data, education records, certain automobile records, and more.

While employees already have protections from workplace discrimination, the purpose of this bill is to provide them with new protections related to the data that algorithm-based systems would use in making decisions to hire, promote, or terminate an employee.

Maryland Proposes Committee on AI in Employment Decisions

A proposed bill in the Maryland House of Delegates would form a committee charged with studying and making recommendations on the proper use of AI technology across a number of sectors. The committee would focus in particular on the use of AI in employment decisions and would recommend regulations to the Maryland legislature aimed at preventing systemic AI bias and keeping AI usage equitable, accountable, sustainable, and responsible with respect to public resources.

Potomac Legal Group is currently monitoring for any activity in Virginia related to the regulation of AI in employment.

Contact Us

Contact Potomac Legal Group to discuss any matter related to your employment, including any employment decision you believe was made by AI or is discriminatory.

Contact Us Today to Discuss Your Matter