The Legal Challenges of Artificial Intelligence in Decision-Making
- Lets Learn Law
- Oct 10
Abstract
Artificial Intelligence (AI) has emerged as a transformative technology, reshaping decision making across multiple sectors, including finance, healthcare, human resources, law enforcement, and social media. AI systems leverage vast amounts of personal and behavioral data to generate predictions, automate processes, and optimize outcomes. While these applications promise efficiency and innovation, they raise critical legal, ethical, and regulatory challenges, including algorithmic bias, discrimination, lack of transparency, and the complexity of assigning liability when harm occurs.
This article examines the legal landscape governing AI decision-making, highlighting the differences in regulatory approaches among the European Union (EU), which prioritizes a comprehensive rights-based framework through the AI Act; the United States (US), which relies on sector-specific and fragmented regulation; and Morocco, where AI governance is emerging but still underdeveloped. The study also explores key issues such as algorithmic accountability, data protection, and cross-border enforcement challenges. Through case studies, including Amazon’s AI recruitment tool and predictive policing algorithms, the article illustrates the real-world consequences of AI deployment. Finally, the study offers recommendations for strengthening legal frameworks, enhancing corporate responsibility, and empowering users, with the goal of promoting ethical, transparent, and legally compliant AI systems.
Keywords: Artificial Intelligence, algorithmic bias, data protection, AI regulation, EU, US, Morocco, ethical AI, legal accountability.
Introduction
Artificial Intelligence (AI) has rapidly become one of the most significant technological developments of the 21st century, profoundly impacting economic, social, and legal systems worldwide. AI systems are increasingly relied upon to make decisions that affect individuals’ lives, from evaluating credit applications and hiring candidates to predicting criminal behavior and personalizing online services. These systems offer the promise of increased efficiency, consistency, and predictive accuracy. However, their growing use also introduces complex legal, ethical, and social challenges, raising questions about accountability, transparency, fairness, and compliance with data protection standards (Zuboff, 2019; UNESCO, 2021).
A primary concern is algorithmic bias, whereby AI systems replicate or amplify existing societal inequalities due to biased training data or flawed model design. Such bias can result in discriminatory outcomes in employment, lending, law enforcement, and other areas, undermining principles of equality and fairness. For example, Amazon’s AI recruitment tool was found to favor male candidates, leading to biased hiring decisions, while predictive policing algorithms in the United States have been criticized for disproportionately targeting racial minorities (Angwin et al., 2016). These examples demonstrate the real-world legal risks of AI deployment, highlighting the need for robust regulatory frameworks.
Transparency and explainability represent additional challenges. Many AI systems function as “black boxes,” making it difficult for users or regulators to understand how decisions are generated. This opacity complicates the enforcement of legal rights, limits access to remedies, and raises ethical concerns regarding informed consent and individual autonomy. Furthermore, the use of AI often involves the processing of vast amounts of personal data, implicating privacy laws such as the EU’s General Data Protection Regulation (GDPR), the US California Consumer Privacy Act (CCPA), and Morocco’s Law No. 09-08 (Law No. 09-08, 2009; GDPR, 2016; CCPA, 2018).
Regulatory approaches to AI differ significantly across jurisdictions. The European Union emphasizes a comprehensive, risk-based regulatory framework, requiring transparency, human oversight, and accountability for high-risk AI applications. In contrast, the United States adopts a sectoral and fragmented approach, applying specific rules depending on the industry and leaving gaps in overall governance. Morocco, while increasingly recognizing the strategic importance of AI, is still developing legal and regulatory mechanisms, primarily focusing on data protection and digital policy frameworks. These variations highlight the challenges of regulating AI in a globalized context, where algorithms often operate across borders.
This article aims to provide a comprehensive analysis of the legal, ethical, and regulatory challenges associated with AI decision-making, with a focus on comparative approaches in the EU, US, and Morocco. By examining real-world cases, identifying gaps in regulation, and proposing solutions, the study seeks to inform policymakers, legal professionals, and corporate actors on how to ensure AI systems operate transparently, ethically, and in compliance with applicable laws. Ultimately, the article contributes to the ongoing discourse on AI governance, emphasizing the importance of balancing innovation with the protection of fundamental rights and societal values (OECD, 2023; UNESCO, 2021).
1. AI Decision-Making and Legal Risks
AI systems analyze large datasets to identify patterns and make predictions. While this capability offers advantages, it raises multiple legal issues:
• Bias and Discrimination: AI may reflect or amplify biases present in training data, leading to unfair outcomes in recruitment, lending, or law enforcement (European Commission, 2023).
• Transparency and Explainability: Black-box algorithms are difficult to interpret, making it challenging for individuals to understand decisions affecting them.
• Accountability and Liability: Determining legal responsibility is complex when AI causes harm or violates rights. Should liability fall on developers, operators, or organizations deploying AI?
• Data Protection: AI relies on large datasets, often containing sensitive personal information, raising privacy concerns under GDPR, CCPA, and Moroccan law.
Case studies highlight these risks. In 2018, Amazon abandoned an AI recruitment tool because it favored male candidates over female candidates, illustrating bias in automated decision-making. Similarly, predictive policing algorithms in the US have been criticized for racial bias (Angwin et al., 2016).
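To make the bias risk concrete, regulators and auditors in the US often start from the “four-fifths rule” used in employment discrimination analysis: if one group’s selection rate falls below 80% of another group’s, the outcome is flagged for possible adverse impact. The sketch below illustrates that check on purely hypothetical hiring data; the function names and figures are illustrative, not drawn from any real audit.

```python
# A minimal sketch of a four-fifths-rule (disparate impact) check,
# a common first step in a fairness audit. All data is illustrative.
from collections import defaultdict

def selection_rates(records):
    """Return the selection rate (hired / applied) for each group."""
    applied = defaultdict(int)
    hired = defaultdict(int)
    for group, selected in records:
        applied[group] += 1
        hired[group] += int(selected)
    return {g: hired[g] / applied[g] for g in applied}

def disparate_impact_ratio(records, protected, reference):
    """Ratio of the protected group's selection rate to the reference
    group's. Values below 0.8 fail the four-fifths rule of thumb."""
    rates = selection_rates(records)
    return rates[protected] / rates[reference]

# Hypothetical outcomes: 10 male and 10 female applicants.
records = ([("male", True)] * 6 + [("male", False)] * 4
           + [("female", True)] * 3 + [("female", False)] * 7)

ratio = disparate_impact_ratio(records, protected="female", reference="male")
print(round(ratio, 2))  # 0.3 / 0.6 = 0.5, well below the 0.8 threshold
```

A ratio of 0.5 in this toy example would flag the system for closer legal and statistical scrutiny; in practice, audits combine such metrics with review of training data, model design, and deployment context.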
2. Comparative Legal Approaches
2.1 European Union (EU)
The EU is leading in AI regulation with a risk-based framework under the AI Act:
• High-risk AI systems (e.g., recruitment, credit scoring, law enforcement) are subject to strict requirements: transparency, human oversight, and regular audits.
• Fundamental Rights Protection: EU law ensures AI does not violate privacy, equality, or anti-discrimination laws.
• Accountability Measures: Organizations deploying high-risk AI must maintain documentation and provide explanations for decisions.
2.2 United States (US)
The US approach is fragmented and sectoral:
• There is no comprehensive federal AI law. Instead, agencies provide guidelines and enforce sector-specific rules (e.g., Equal Employment Opportunity Commission for recruitment, FTC for consumer protection).
• AI liability is assessed under general tort or contract law, creating uncertainty for companies and users.
2.3 Morocco
Morocco is still in the early stages of AI governance, primarily through national digital strategies emphasizing AI ethics, innovation, and privacy. Legal frameworks are limited, focusing on data protection (Law No. 09-08) and emerging digital policies. Cross-border AI deployments remain a challenge for enforcement and compliance.
3. Challenges and Issues
• Algorithmic Bias: Data-driven decisions risk reinforcing societal inequalities.
• Opacity and Accountability: Lack of explainability limits legal recourse for affected individuals.
• Global Enforcement: AI platforms often operate internationally, complicating regulatory oversight.
• Ethical Concerns: AI may undermine human dignity, autonomy, or social trust if misused.
4. Recommendations
• Regulatory Measures: Adopt risk-based legislation, requiring high-risk AI systems to undergo audits and transparency assessments.
• Corporate Responsibility: Implement AI ethics policies, fairness audits, and human oversight mechanisms.
• User Empowerment: Provide access to explanations for decisions affecting individuals and avenues for appeal.
• International Cooperation: Harmonize standards to regulate cross-border AI applications.
Conclusion
AI is transforming decision-making across sectors, but its legal, ethical, and social implications demand careful regulation. Comparative analysis shows that the EU provides a comprehensive framework emphasizing accountability and fundamental rights, the US applies a sectoral, fragmented approach, and Morocco is gradually developing AI governance frameworks. Addressing algorithmic bias, transparency, accountability, and cross-border enforcement is crucial for ensuring AI systems are legally compliant, ethical, and socially beneficial. Policymakers, companies, and users must collaborate to create a balanced framework that fosters innovation while safeguarding rights.
REFERENCES
• Nagda, P. (2025). Legal Liability and Accountability in AI Decision Making: Challenges and Solutions. IJIRT. https://ijirt.org/publishedpaper/IJIRT174899_PAPER.pdf
• Socol de la Osa, D. U., & Remolina, N. (2024). Artificial intelligence at the bench: Legal and ethical challenges of informing or misinforming judicial decision-making through generative AI. Data & Policy. Cambridge University Press. https://www.cambridge.org/core/journals/data-and-policy/article/artificial-intelligence-at-the-bench-legal-and-ethical-challenges-of-informingor-misinformingjudicial-decisionmaking-through-generative-ai/D1989AC5C81FB67A5FABB552D3831E46
• The Impact of Artificial Intelligence on Legal Decision Making. (2023). International Comparative Jurisprudence. https://ojs.mruni.eu/ojs/international-comparative-jurisprudence/issue/view/487
• European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024. http://data.europa.eu/eli/reg/2024/1689/oj
• OECD. (2019). OECD Principles on Artificial Intelligence. Paris: OECD. https://oecd.ai/en/ai-principles
This article is authored by Safaa Fellah, a law student from Morocco and a trainee of the Lets Learn Law Legal Research Training Programme. The views and opinions expressed in this piece are solely those of the author.