Automated Decision-Making: A Comparative Analysis between the European GDPR and the Emirati Federal Decree-Law No. 45 of 2021

Authors

DOI:

https://doi.org/10.36394/jls.v23.i1.28

Keywords:

Automated decision-making, data protection, GDPR, Federal Decree-Law No. 45/2021.

Abstract

In the digital age, the growing use of automated decision-making systems by businesses and public institutions raises major challenges for personal data protection and fundamental rights. This study offers a comparative analysis of the legal frameworks governing automated decision-making under the European General Data Protection Regulation (GDPR) and UAE Federal Decree-Law No. 45 of 2021 on personal data protection, highlighting the different approaches these two texts adopt to regulate the use of automated decision-making systems. The research examines in depth the convergences and divergences between the two regulatory frameworks, particularly the guarantees offered to data subjects and the obligations imposed on data controllers in this context. To that end, we begin by defining the contours of the concept of automated decision-making and the challenges it poses, before examining the guarantees and obligations provided by the European GDPR and UAE Decree-Law No. 45.

References

AlgorithmWatch. (2019). Automating society. Available at: https://algorithmwatch.org/en/automating-society-2019/, accessed on 11 November 2024.

Alhazmi, A., & Arachchilage, N. A. G. (2020). Why are developers struggling to put GDPR into practice when developing Privacy-Preserving software systems. arXiv preprint arXiv:2008.02987.

Artzt, M., & Dung, T. V. (2022). Artificial intelligence and data protection: How to reconcile both areas from the European law perspective. Vietnamese Journal of Legal Sciences, 7(2), 39-58. https://doi.org/10.2478/vjls-2022-0007.

Ben Yacoub, H. Du réel au virtuel, l'art de poser les bonnes questions.

Bensamoun, A., & Loiseau, G. (2017). L'intégration de l'intelligence artificielle dans l'ordre juridique en droit commun: questions de temps. Dalloz IP/IT: droit de la propriété intellectuelle et du numérique, (04), 239-243.

Besse, P., Castets-Renard, C., Garivier, A., & Loubes, J. M. (2018). L'IA du quotidien peut-elle être éthique?. Statistique et Société, 6(3), 9-31.

Bhutta, N., Hizmo, A., & Ringo, D. (2022). How much does racial bias affect mortgage lending? Evidence from human and algorithmic credit decisions. Available at: https://www.federalreserve.gov/econres/feds/files/2022067pap.pdf.

Borch, C. (2016). High-frequency trading, algorithmic finance and the Flash Crash: reflections on eventalization. Economy and Society, 45(3-4), 350-378.

Brkan, M., & Bonnet, G. (2020). Legal and technical feasibility of the GDPR's quest for explanation of algorithmic decisions: of black boxes, white boxes and fata morganas. European Journal of Risk Regulation, 11(1), 18-50.

Bu, F., Wang, N., Jiang, B., & Liang, H. (2020). "Privacy by Design" implementation: Information system engineers' perspective. International Journal of Information Management, 53, 102124.

Castets-Renard, C. (2016). Brève analyse du règlement général relatif à la protection des données personnelles. Dalloz IP/IT: droit de la propriété intellectuelle et du numérique, (07 et 08), 331.

Castets-Renard, C. (2018). Régulation des algorithmes et gouvernance du machine learning : vers une transparence et « explicabilité » des décisions algorithmiques ?. Revue Droit & Affaires, Revue Paris II Assas, 15ème édition.

Castets-Renard, C. (2019). Accountability of algorithms in the GDPR and beyond: a European legal framework on automated decision-making. Fordham Intell. Prop. Media & Ent. LJ, 30, 91.

Cavoukian, A. (2009). Privacy by design: The 7 foundational principles. Information and Privacy Commissioner of Ontario, Canada, 5, 12.

Chemlali, L., Salmi, A., & Benseddik, L. (2023). A reflection on the UAE's new data protection law: A comparative approach with GDPR. Journal of Data Protection & Privacy, 6(1), 24-36.

Christodoulou, P., & Limniotis, K. (2024). Data protection issues in automated decision-making systems based on machine learning: Research challenges. Network, 4(1), 91-113.

Citron, D. K. (2007). Technological due process. Wash. UL Rev., 85, 1249.

Colesky, M., Hoepman, J. H., & Hillen, C. (2016, May). A critical analysis of privacy design strategies. In 2016 IEEE security and privacy workshops (SPW) (pp. 33-40). IEEE.

European Data Protection Board (EDPB). (2020). Guidelines 05/2020 on consent under Regulation (EU) 2016/679, version 1.1, adopted on 4 May 2020. Available at: https://www.edpb.europa.eu/sites/default/files/files/file1/edpb_guidelines_202005_consent_fr.pdf.

Dary, M., & Benaissa, L. (2016). Privacy by Design: un principe de protection séduisant mais complexe à mettre en œuvre. Dalloz IP/IT, 2016(10).

Federal Decree-Law No. 45/2021 concerning the protection of personal data (No. 712, p. 15). Official Gazette.

Deltorn, J. M. (2017). La protection des données personnelles face aux algorithmes prédictifs. Revue des droits et libertés fondamentaux, 1-18.

Edwards, L., & Veale, M. (2017). Slave to the algorithm? Why a 'right to an explanation' is probably not the remedy you are looking for. Duke L. & Tech. Rev., 16, 18.

The Royal Society. (2019). Explainable AI: the basics. London: The Royal Society. Available at: https://ec.europa.eu/futurium/en/system/files/ged/ai-and-interpretability-policy-briefing_creative_commons.pdf.

Fischer, J. E., Greenhalgh, C., Jiang, W., Ramchurn, S. D., Wu, F., & Rodden, T. (2021). In-the-loop or on-the-loop? Interactional arrangements to support team coordination with a planning agent. Concurrency and Computation: Practice and Experience, 33(8), e4082.

Goodman, B., & Flaxman, S. (2017). European Union regulations on algorithmic decision-making and a "right to explanation". AI Magazine, 38(3), 50-57.

Grant, D. G., Behrends, J., & Basl, J. (2023). What we owe to decision-subjects: beyond transparency and explanation in automated decision-making. Philosophical Studies. https://doi.org/10.1007/s11098-023-02013-6.

Article 29 Data Protection Working Party. (2017). Guidelines on Data Protection Impact Assessment (DPIA) and determining whether processing is "likely to result in a high risk" for the purposes of Regulation (EU) 2016/679. WP 248 rev.01.

Article 29 Data Protection Working Party. (2017). Opinion 2/2017 on data processing at work. Available at: https://ec.europa.eu/newsroom/article29/items/610169/en.

Gürses, S., Troncoso, C., & Diaz, C. (2011). Engineering privacy by design. Computers, Privacy & Data Protection, 14(3), 25.

Hurley, M., & Adebayo, J. (2016). Credit scoring in the era of big data. Yale JL & Tech., 18, 148.

Ivanov, S. H. (2023). Automated decision-making. Foresight, 25(1), 4-19.

Kaminski, M. E. (2019). Binary governance: Lessons from the GDPR's approach to algorithmic accountability. Southern California Law Review, 92, 1529.

Kaminski, M. E. (2019). The right to explanation, explained. Berkeley Technology Law Journal, 34(1), 189-215. https://doi.org/10.15779/Z38TD9N83H.

Kaminski, M. E., & Urban, J. M. (2021). The right to contest AI. Columbia Law Review, 121(7), 1957-2048.

Kroener, I., & Wright, D. (2014). A strategy for operationalizing privacy by design. The Information Society30(5), 355-365.

Article 29 Data Protection Working Party. (2018). Guidelines on automated individual decision-making and profiling for the purposes of Regulation (EU) 2016/679, revised version of 6 February 2018, WP251rev.01. Available at: https://ec.europa.eu/newsroom/article29/items/612053.

Lukács, A., & Váradi, S. (2023). GDPR-compliant AI-based automated decision-making in the world of work. Computer Law & Security Review, 50, 105848.

Markus, A. F., Kors, J. A., & Rijnbeek, P. R. (2021). The role of explainability in creating trustworthy artificial intelligence for health care: a comprehensive survey of the terminology, design choices, and evaluation strategies. Journal of Biomedical Informatics, 113, 103655.

Mendoza, I., & Bygrave, L. A. (2017). The right not to be subject to automated decisions based on profiling. EU internet law: Regulation and enforcement, 77-98.

European Parliament and Council of the European Union. (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation). Official Journal of the European Union, L 119, 1-88. https://eur-lex.europa.eu/legal-content/FR/TXT/?uri=CELEX%3A32016R0679.

Prince, A. E., & Schwarcz, D. (2019). Proxy discrimination in the age of artificial intelligence and big data. Iowa L. Rev., 105, 1257.

Rahwan, I., Cebrian, M., Obradovich, N., Bongard, J., Bonnefon, J. F., Breazeal, C., ... & Wellman, M. (2019). Machine behaviour. Nature, 568(7753), 477-486.

Raimondo, L. (2021). La protection des données personnelles en 100 Questions/Réponses. Editions Ellipses.

Selbst, A., & Powles, J. (2018, January). "Meaningful information" and the right to explanation. In Conference on Fairness, Accountability and Transparency (pp. 48-48). PMLR.

Selbst, A. D., & Barocas, S. (2018). The intuitive appeal of explainable machines. Fordham L. Rev., 87, 1085.

Spann, M., Bertini, M., Koenigsberg, O., Zeithammer, R., Aparicio, D., Chen, Y., ... & Yoo, H. (2024). Algorithmic Pricing: Implications for Consumers, Managers, and Regulators (No. w32540). National Bureau of Economic Research.

Spiekermann, S. (2012). The challenges of privacy by design. Communications of the ACM, 55(7), 38-40.

Published

2026-03-31