
The Impact of Artificial Intelligence on Increasing Bias and Discrimination

Introduction

This paper examines the impact of Artificial Intelligence (AI) on human rights, noting that while AI can simplify tasks, it also perpetuates bias and discrimination, contradicting principles enshrined in key human rights documents. It points out that biased machine learning algorithms can exacerbate existing inequalities and contribute to job loss and reduced human interaction.


The European Commission's ‘Draft Ethics Guidelines for Trustworthy AI’ are discussed, emphasizing the risks of relying on AI given its lack of morality and ethical principles. A specific example is Amazon's abandoned recruiting algorithm that favored male candidates. The need for responsibility and accountability in AI use is highlighted, as tech companies often prioritize profit over equality, leading to ethnic and moral injustices. The paper aims to examine how companies like Google and Apple Inc. utilize AI to cut costs and boost profits, potentially undermining equality rights.


Artificial Intelligence: Transforming Sectors, Changing Perspectives

Artificial Intelligence (AI) drives innovation and productivity across industries but requires responsible implementation with ethical considerations. Its use in facial recognition technology within criminal justice raises concerns, as it can amplify existing biases and disproportionately affect marginalized communities, necessitating discussions on social inequalities exacerbated by machine learning algorithms.


Facial Recognition: Facial Recognition (FR) systems typically rely on machine learning algorithms to identify and categorize individuals efficiently and at scale. These algorithms, often referred to as trained models, are fed extensive datasets of labeled information, enabling them to classify and identify various elements based on their associated features.


For instance, a trained system can effectively distinguish between cars and buildings, and classify specific types of each, with accuracy depending on the quality of its training. Classification accuracy generally improves with larger datasets, though FR applications often suffer from imbalanced representation of the objects, and people, they are meant to recognize.
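The effect of imbalanced training data can be illustrated with a deliberately simplified sketch (the class labels and counts below are invented for illustration, not drawn from any real FR system): a naive model trained on a skewed dataset can report high overall accuracy while failing entirely on the underrepresented class.

```python
from collections import Counter

# Hypothetical imbalanced training set: 90 samples of class "A", 10 of class "B".
training_labels = ["A"] * 90 + ["B"] * 10

# A naive "majority class" model simply predicts the most common training label.
majority_class = Counter(training_labels).most_common(1)[0][0]

# Evaluate on a test set with the same skew.
test_labels = ["A"] * 90 + ["B"] * 10
predictions = [majority_class] * len(test_labels)

overall_accuracy = sum(p == t for p, t in zip(predictions, test_labels)) / len(test_labels)
minority_recall = sum(
    p == t for p, t in zip(predictions, test_labels) if t == "B"
) / test_labels.count("B")

print(overall_accuracy)  # 0.9 -- looks strong on paper
print(minority_recall)   # 0.0 -- the minority class is never recognized
```

This is the "accuracy paradox" in miniature: a 90% headline accuracy conceals total failure on the minority class, which is precisely the pattern behind FR systems that perform well on overrepresented groups and poorly on everyone else.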

Google Photos, an advanced image recognition application, categorized two Black people as gorillas. This happened because the training dataset contained a disproportionate number of white men, leading the application to produce relatively inaccurate results for underrepresented groups.


Artificial intelligence interprets images through labeled datasets, but both humans and algorithms can make biased decisions influenced by factors such as sexism and racism. Machine learning struggles with racial classification due to a lack of diverse training data, which mainly features white males. Additionally, a report from ‘The Intercept’ highlighted that in US military drone strikes post-9/11, nearly nine out of ten people killed were not the intended targets, demonstrating the difficulty machines face in making accurate distinctions.


Predictive Analysis: Predictive analysis (PA) involves using AI or machine learning algorithms to analyze historical data in order to forecast trends and understand customer behavior. In manufacturing, this allows firms to anticipate market needs, optimize inventory, and manage resources effectively to meet customer expectations and stay ahead of competitors. In the article ‘Amnesty International USA Considers Using Big Data to Predict Human Rights Violations’, Mohana Ravindranath established that predictive analysis as currently practiced can reach a crisis point affecting individual rights, because it is not yet advanced enough to interpret data while simultaneously preserving the confidentiality and sensitivity of the data acquired, or securing consent for the underlying data mining.
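At its core, predictive analysis fits a model to historical data and extrapolates. A minimal sketch of the idea, using ordinary least squares to fit a straight-line trend (the monthly sales figures are invented for illustration):

```python
# Hypothetical monthly sales history (units sold); all figures are invented.
history = [100, 110, 125, 130, 145, 150]

n = len(history)
xs = list(range(n))
mean_x = sum(xs) / n
mean_y = sum(history) / n

# Ordinary least squares for a straight-line trend: y = slope * x + intercept.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history)) / sum(
    (x - mean_x) ** 2 for x in xs
)
intercept = mean_y - slope * mean_x

# Forecast the next period by extrapolating the fitted line.
forecast_next = slope * n + intercept
print(round(forecast_next, 1))
```

The forecast is only as good as the history it extrapolates: any bias or gap in the historical record is projected straight into the prediction, which is the mechanism behind the concerns raised above.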


AI-driven predictive analysis in lending and financial services raises significant concerns regarding inequality. Machine learning algorithms employed by banks are often influenced by historical biases related to race and gender, which can lead to the reinforcement of societal inequalities and increased wealth gaps. A prominent case is Amazon's 2018 recruitment tool, which was discarded after it showed biased preferences for men due to training on data reflective of a male-dominated industry. This tool was found to reject resumes with female-associated terms, showcasing how historical bias can result in flawed AI predictions.
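How historical bias becomes a learned preference can be sketched with a deliberately simplified resume scorer (all resumes, keywords, and hiring labels below are invented for illustration and do not reproduce Amazon's actual system): a model trained on past hiring outcomes learns to penalize any term that happened to correlate with historical rejection, such as "women's".

```python
from collections import defaultdict

# Invented historical hiring data: (resume keywords, hired?) pairs reflecting
# a male-dominated industry, where "women's" co-occurs with rejection.
history = [
    ({"python", "leadership"}, True),
    ({"python", "chess club"}, True),
    ({"python", "women's chess club"}, False),
    ({"java", "women's college"}, False),
]

# "Training": score each keyword by how often it appears in hired vs. rejected resumes.
score = defaultdict(float)
for keywords, hired in history:
    for kw in keywords:
        score[kw] += 1.0 if hired else -1.0

def rank(resume_keywords):
    """Sum the learned keyword scores -- higher means preferred by the model."""
    return sum(score[kw] for kw in resume_keywords if kw in score)

# Two resumes identical except for one gendered term: the learned model
# downgrades the one mentioning "women's college".
print(rank({"python", "leadership"}))
print(rank({"python", "women's college"}))
```

Nothing in the code mentions gender explicitly; the penalty emerges purely from correlations in the historical labels, which is why such bias is hard to detect by inspecting the model alone.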


Artificial Intelligence Controlled by Technology Leaders: Addressing Human Rights Concerns

We are on the brink of significant change, as automation through robots and machines will dominate our future. Leading tech companies deploy AI through techniques like machine learning and data analysis. Natasha E. Bajema warns in her article "Beware the Jabberwocky: The Artificial Monsters are Coming" that automation aims to enhance the reliability of mutually assured destruction. While AI offers many advantages, it also poses several disadvantages, particularly concerning individual rights outlined in international treaties.


Apple Inc., a leading technology company, faced scrutiny in 2017 over its Face ID feature when an incident revealed potential racial bias in facial recognition, particularly concerning Asian faces. Despite claims by Vice President Cynthia Hogan of diverse participant collaboration, the outcomes indicated a lack of representation in training data. This situation underscores the necessity for comprehensive datasets in AI development, even for prominent firms like Apple.


The deployment of AI by major technology companies impacts employment rights, with individuals from less privileged backgrounds facing greater vulnerability to job loss. A study by Oxford researchers indicates that these individuals may struggle more with automation, as higher-paying jobs often necessitate a college education and human judgment, raising concerns about systemic discrimination and socioeconomic barriers that limit their opportunities.


The right to work and protection against unemployment are guaranteed under Article 23 of the UDHR and Article 6 of the ICESCR. Denial of employment violates these international principles, and biased AI systems can contribute to such denials without hesitation or scrutiny.


In 2017, Changying Precision Technology reduced its manual workforce by 90% through automation, resulting in a 250% increase in production and an 80% decrease in defects. This shift has created fewer opportunities for skilled workers, with remaining employment driven more by financial necessity than by productivity. A similar automation push by Adidas has also cost lower-skilled workers their jobs and heightened job polarization. The trend is reflected in major tech companies outsourcing complex tasks to AI, raising human rights concerns as global AI investment was expected to triple by 2019, with implications for the tech industry's impact on human rights.


AI exacerbates wealth disparities and discrimination, especially through targeted online ads by companies like Facebook and Google. An example from 2013 shows that searches for African-American sounding names often resulted in misleading ads related to past arrests, while such ads were less common for white-sounding names, indicating racial bias. The opacity of AI systems complicates the identification of discrimination and its root causes, which may violate individuals' rights against discrimination. Additionally, the Dutch Data Protection Authority revealed that Facebook permitted advertisers to target users based on sensitive personal characteristics.


Reforming Artificial Intelligence: Addressing Human Rights Abuses by Technology Corporations

Currently, there are no specific laws regulating AI in India. The NITI Aayog has proposed seven principles, including safety and privacy, to protect the public interest. A committee constituted by the Ministry of Electronics and Information Technology is set to oversee AI regulation, particularly regarding its impact on rights such as privacy and dignity. Additionally, the New Education Policy promotes coding education from class VI, aiming to establish India as a leader in advanced AI technologies and emphasizing the need for ethical standards in technological advancement.


Data protection in India is vital for upholding fundamental rights like equality. Personal data must be handled fairly, transparently, and with consent, and only for specific purposes while ensuring accuracy and minimal storage duration. The Supreme Court emphasizes non-discrimination as a constitutional principle, noting that AI could unintentionally lead to bias. Industries, especially tech companies, must adopt responsible AI practices to protect individual rights and avoid discrimination.


Conclusion 

As Artificial Intelligence evolves, its implications for human rights and societal equity become increasingly significant. While AI promotes innovation, it can also exacerbate biases in historical data, leading to violations of rights such as equality and freedom from discrimination. Case studies from companies like Amazon, Apple, and Facebook illustrate these issues. The lack of regulations calls for urgent AI governance reform, prompting tech companies, policymakers, and civil society to prioritize ethical considerations. Emphasizing transparency, accountability, and inclusivity is essential to ensure AI aligns with values of equality and justice.


References

  1. Mittelstadt, Brent, et al. (2017), "The Ethics of Algorithms: Mapping the Debate", 3 Big Data & Soc'y 2.

  2. Cataleta, Maria Stefania. "Humane Artificial Intelligence: The Fragility of Human Rights Facing AI." East-West Center, 2020. http://www.jstor.org/stable/resrep25514.

  3. Jeffrey Dastin, Amazon scraps secret AI recruiting tool that showed bias against women, REUTERS (Oct. 11, 2018, 4:34 AM), https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G.

  4. Access Now. "Human Rights in the Age of Artificial Intelligence." Access Now (2018).

  5. Leslie, David. "Understanding Bias in Facial Recognition Technologies." arXiv preprint arXiv:2010.07023 (2020).

  6. Palmiotto, Francesca, and Natalia Menéndez González. "Facial Recognition Technology, Democracy and Human Rights." Computer Law & Security Review 50 (2023): 105857.

  7. Bhavya Kaushal, Being sensible with AI: Why tech companies need to be careful with artificial intelligence, BUSINESS TODAY MAGAZINE, https://www.businesstoday.in/magazine/technology/story/being-sensible-with-ai-why-tech-companies-need-to-be-careful-with-artificial-intelligence-358299-2022-12-30.

  8. Sophie Curtis, iPhone X racism row: Apple's Face ID fails to distinguish between Chinese users, MIRROR (Dec. 22, 2017, 12:06 PM), https://www.mirror.co.uk/tech/apple-accused-racism-after-face-11735152.

  9. Frey, Carl Benedikt, and Michael A. Osborne. "The Future of Employment: How Susceptible Are Jobs to Computerization?" Technological Forecasting and Social Change 114 (2017): 254-280.

  10. Mihai Andrei, Chinese factory replaces 90% of human workers with robots. Production rises by 250%, defects drop by 80%, ZME SCIENCE (Feb. 3, 2017), https://www.zmescience.com/other/economics/china-factory-robots-03022017/.

  11. Shestakofsky, Benjamin. "More Machinery, Less Labor?" Berkeley Journal of Sociology 59 (2015): 86-91. http://www.jstor.org/stable/44713549.

This article is authored by Mohammad Adil, who was among the Top 20 performers in the Quiz Competition on Mergers & Acquisitions organized by Lets Learn Law. The views and opinions expressed in this piece are solely those of the author.

