Unethical AI examples in this context include Amazon's experimental recruiting algorithm, which was found to prefer male candidates over female ones because it had been trained on resumes from a period in which most hires were men. Such cases illustrate how AI, when not developed and monitored ethically, can reinforce discrimination rather than mitigate it.
Examples of AI bias in real life: in healthcare, underrepresented data on women and minority groups can skew predictive AI algorithms. For example, computer-aided diagnosis (CAD) systems have been found to return lower accuracy for Black patients than for white patients.
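Accuracy gaps like this only become visible when performance is broken out by group rather than averaged over the whole population. The sketch below shows one minimal way to surface such a gap; the records and group labels are invented for illustration, and in practice they would come from a held-out evaluation set with a demographic column.

```python
# Minimal sketch: measuring per-group accuracy of a diagnostic model.
# The records below are hypothetical stand-ins for real evaluation data.
from collections import defaultdict

records = [
    # (patient group, true label, model prediction)
    ("white", 1, 1), ("white", 0, 0), ("white", 1, 1), ("white", 0, 1),
    ("black", 1, 0), ("black", 0, 0), ("black", 1, 1), ("black", 1, 0),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, label, prediction in records:
    total[group] += 1
    correct[group] += int(label == prediction)

for group in total:
    print(f"{group}: accuracy = {correct[group] / total[group]:.2f}")
```

A single aggregate accuracy number would hide the disparity that this per-group breakdown exposes.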
“When women use some AI-powered systems to diagnose illnesses, they often receive inaccurate answers, because the AI is not aware of symptoms that may present differently in women.”
For instance, in image recognition a discriminative AI might determine whether a picture contains a cat or a dog. This classification ability makes discriminative AI invaluable across sectors: diagnostic tools in healthcare, fraud detection in finance, and customer preference analysis in retail.
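To make "discriminative" concrete: such a model learns the boundary between classes directly, estimating p(class | features) rather than modeling how the data was generated. Here is a minimal sketch using scikit-learn; the two numeric features standing in for image descriptors are invented for illustration.

```python
# Minimal sketch of a discriminative classifier: it models p(class | features)
# directly, learning a decision boundary between "cat" and "dog" examples.
from sklearn.linear_model import LogisticRegression

# Hypothetical two-number summaries of images (e.g., ear shape, snout length).
X = [[0.2, 0.1], [0.3, 0.2], [0.8, 0.9], [0.9, 0.7]]
y = ["cat", "cat", "dog", "dog"]

model = LogisticRegression().fit(X, y)
print(model.predict([[0.25, 0.15], [0.85, 0.8]]))  # -> ['cat' 'dog']
```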
Report discrimination to a local Fair Employment Practices Agency (FEPA). If the discrimination violates both a state law and a federal law, the FEPA will also forward your complaint to the EEOC. Use the EEOC's directory of field offices to find the FEPA nearest you.
What are the three sources of bias in AI? Researchers have identified three types of bias in AI: algorithmic, data, and human. Algorithmic bias arises within the design and implementation of an algorithm; data bias arises when training data is unrepresentative or reflects historical inequities; human bias enters through the assumptions and choices of the people who build and deploy the systems.
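Data bias is the easiest of the three to demonstrate: if the training labels encode a historical skew, a model fit to them will faithfully reproduce it. The sketch below is a deliberately simplified illustration with invented data; a classifier trained on past hiring decisions that favored one group learns that same preference.

```python
# Sketch of data bias: training labels reflect a historical skew, and the
# model reproduces it. All data here is invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Features: (years_experience, gender), with gender encoded 0 = male, 1 = female.
# Historical "hired" labels favored male applicants at equal experience levels.
X = [[5, 0], [5, 1], [3, 0], [3, 1], [7, 0], [7, 1]]
y = [1, 0, 1, 0, 1, 0]  # males hired, equally qualified females rejected

model = DecisionTreeClassifier().fit(X, y)
# Two identical resumes that differ only in the gender field:
print(model.predict([[5, 0], [5, 1]]))  # -> [1 0]: the historical skew is learned
```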
One of the primary concerns with AI in employment is the opacity of decision-making processes. Employers should be required to disclose the criteria and algorithms used in AI systems. This transparency allows for external auditing and ensures that the AI systems comply with anti-discrimination laws.
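One concrete form such an external audit can take is the EEOC's "four-fifths" rule of thumb: if the selection rate for any group falls below 80% of the rate for the most-selected group, the outcome is flagged as potential adverse impact. The sketch below uses hypothetical selection counts; a real audit would draw on logged outcomes from the disclosed AI screening system.

```python
# Sketch of a disparate-impact audit using the four-fifths rule of thumb.
# Counts are hypothetical; a real audit would use logged screening outcomes.
selected = {"group_a": 60, "group_b": 30}
applied = {"group_a": 100, "group_b": 80}

rates = {g: selected[g] / applied[g] for g in applied}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "POTENTIAL ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")
```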
An example is a facial recognition system that is less accurate at identifying people of color, or a machine translation system that maps gender-neutral terms onto stereotypes, for instance rendering "doctor" as male and "nurse" as female when translating from a language with genderless pronouns.
Biases in AI can lead to discriminatory practices that are illegal. Those biases can disproportionately impact marginalized groups, resulting in inequities in areas such as hiring practices.
For example, an organization's AI screening tool was found to be biased against older applicants: a candidate who had been rejected landed an interview after resubmitting the same application with a birthdate changed to make them appear younger.
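That incident also suggests a simple audit technique: counterfactual testing, where the same application is scored twice with only the protected attribute changed. The sketch below assumes a hypothetical score_application function standing in for whatever screening model an organization deploys; the toy model at the end is invented purely to show the test firing.

```python
# Sketch of a counterfactual age test: score the same application twice,
# changing only the birth year. `score_application` is a hypothetical
# stand-in for the deployed screening model.
import copy

def counterfactual_age_test(application, score_application, year_shift=25):
    """Return the score gap caused solely by making the applicant look younger."""
    younger = copy.deepcopy(application)
    younger["birth_year"] += year_shift  # later birth year -> younger applicant
    return score_application(younger) - score_application(application)

# Usage with a toy model that (improperly) rewards younger applicants:
toy_model = lambda app: 0.5 + 0.01 * (app["birth_year"] - 1970)
gap = counterfactual_age_test({"birth_year": 1965, "skills": ["python"]}, toy_model)
print(f"score gap from appearing younger: {gap:+.2f}")  # nonzero -> age leakage
```

A consistently nonzero gap across many applications is evidence that age, a protected characteristic, is influencing the model's decisions.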