If it seems you woke up one day and suddenly AI was everywhere, you are not alone. Seemingly overnight, the world was inundated with talk of AI-generated art, AI being used for security purposes, and the popular social media app TikTok boasting one of the most effective and addictive recommendation algorithms. However, looks can be deceiving. The use of AI began in the healthcare field in the early 1970s with MYCIN, an AI program used to help identify treatments for blood infections. AI's footprint in the healthcare sector continued to grow, and by 1979 the American Association for Artificial Intelligence had been formed. Within only ten years, advances in AI helped achieve medical milestones such as more precise surgical procedures, faster data collection and processing, and more.
But AI’s coming-of-age story did not stop at the healthcare industry. AI has grown substantially since its introduction in that field. Today, AI can pilot drones, beat humans at video games, and even drive our cars for us. People see so much potential in AI that, according to a 2014 Pew Research survey, 48% of experts believed AI and digital agents would displace a significant number of blue-collar and white-collar jobs by 2025. So, what exactly is AI, and how can a machine meant to be objective also be discriminatory?
In its simplest form, AI is designed to mimic human intelligence to perform tasks. More specifically, AI refers to machines that respond to stimulation in ways consistent with traditional human responses, given the human capacity for contemplation, judgment, and intention. To understand how a machine meant to be objective can also be discriminatory, it is essential to understand that AI can only be as unbiased as its creator. Unfortunately, it seems the phrase “the apple doesn’t fall far from the tree” applies to AI just as it does to humans. For example, in the very same field that started AI’s rise to the mainstream, healthcare, a study showed that an algorithm used to determine which patients needed care required African American patients to be far sicker than their Caucasian counterparts before being recommended for the program. The algorithm did not make these decisions based on race; it was based on historical healthcare spending, which inherently reflects a history in which less money was spent on the care of African American patients than on the care of Caucasian patients.
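To make the mechanism concrete, the sketch below is a minimal, hypothetical illustration of how a model that scores patients by predicted spending, rather than by medical need, can reproduce past disparities even though race is never an input. The function name, threshold, and numbers are invented for demonstration and are not drawn from the study described above.

```python
# Hypothetical illustration of "proxy bias": the model never sees race,
# but spending predictions carry the imprint of historically unequal care.

def recommend_for_care_program(predicted_annual_spending, threshold=5000):
    """Flag a patient for the extra-care program if predicted spending
    exceeds a fixed threshold (an assumed, simplified decision rule)."""
    return predicted_annual_spending >= threshold

# Two equally sick patients. The model predicts lower spending for the
# patient from a group that historically had less spent on its care,
# so only one of them is flagged for the program.
patient_a_predicted_spending = 6200  # group with historically higher spending
patient_b_predicted_spending = 4100  # group with historically lower spending

print(recommend_for_care_program(patient_a_predicted_spending))  # True
print(recommend_for_care_program(patient_b_predicted_spending))  # False
```

The point of the sketch is that the disparity arises from the choice of proxy variable, not from any explicit reference to race in the code.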
United States Assistant Attorney General Kristen Clarke of the Civil Rights Division has already started to ring the alarm about the dangers of employers blindly relying on AI during recruitment, warning that “the use of AI is compounding the longstanding discrimination that job seekers with disabilities face.” In an effort to streamline the hiring process, employers have begun relying on AI tools such as resume scanners and video-interviewing software that measures a person’s speech patterns and sometimes even facial expressions. Unfortunately, while tools like these are meant to find the best candidate for a particular job, they often have the unintended consequence of screening out people with speech impediments, people with autism, and people with other mental or physical impairments or differences.
As AI becomes increasingly integral to the average American’s day-to-day existence, governments are becoming increasingly aware of AI’s propensity for ‘algorithmic discrimination.’ On November 10, 2021, New York City became one of the first jurisdictions to pass such a bill, the Automated Employment Decision Tool Law (“AEDT”), which is currently set to become effective on April 15, 2023. The law is one of the first to place compliance obligations on the employers who use the AI tools rather than on the software developers who created them.
Under the current proposed rules, an ‘automated employment decision tool’ is defined as “any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation” that is used to “substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.”
The AEDT law requires employers and employment agencies to (1) conduct annual bias audits of the AI tool; (2) make publicly available a summary of the audit and the distribution date of the tool; (3) give notice to employment candidates; and (4) make available information about the source and type of data collected by the tool and the employer’s data retention policy, unless disclosure of this kind would violate the law or interfere with an ongoing law enforcement investigation.
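For a sense of what a bias audit can involve, the proposed rules contemplate comparing how often a tool selects candidates from different demographic categories. The sketch below is a simplified, hypothetical illustration of that kind of calculation, using selection rates and impact ratios; it is not the audit methodology prescribed by the law, and the group names and outcomes are invented.

```python
from collections import defaultdict

def impact_ratios(candidates):
    """Compute each group's selection rate and its impact ratio relative to
    the highest-rated group (a common fairness metric in bias audits)."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in candidates:
        total[group] += 1
        selected[group] += int(was_selected)

    rates = {group: selected[group] / total[group] for group in total}
    best_rate = max(rates.values())
    return {group: (rate, rate / best_rate) for group, rate in rates.items()}

# Hypothetical screening outcomes: (demographic category, selected by the tool?)
sample = (
    [("group_a", True)] * 40 + [("group_a", False)] * 60
    + [("group_b", True)] * 20 + [("group_b", False)] * 80
)

for group, (rate, ratio) in impact_ratios(sample).items():
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f}")
```

In this made-up sample, the tool selects group_b candidates at half the rate of group_a candidates (an impact ratio of 0.50), the kind of disparity an annual audit is intended to surface.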
Companies that use AI tools in their hiring will be required to provide notice, at least ten days before using the tool, to candidates and employees residing in New York City that (1) an automated employment decision tool will be used to assess their application or employment; (2) the specific qualifications and characteristics that the tool will be assessing; and (3) that the candidate or employee may request an “alternative selection procedure” or “accommodation.”
While New York City is not alone in passing legislation to combat the potential dangers of living in an AI world (states such as California and Connecticut have also proposed bills), state and local efforts can only do so much. Federal legislation must be proposed to protect those who may be negatively impacted by employers, hospitals, and private companies that use AI.
If you believe that your employer or potential employer has failed to disclose their use of AI tools, or has misused AI tools in their hiring process or in assessing your job performance, contact Borrelli & Associates, P.L.L.C. to schedule a free consultation through one of our websites, www.employmentlawyernewyork.com, www.516abogado.com, or any of our phone numbers: (516) 248-5550, (516) ABOGADO, or (212) 679-5000.