The regulation of automated decision-making (hereinafter: ADM) in the GDPR is a topic of lively discussion. While early commentators focused mostly on whether a right to explanation exists in the body of the Regulation, the subsequent debate has concentrated on how to achieve an adequate level of explainability or, better still, a good level of algorithmic accountability and fairness.
The GDPR addresses the risks of automated decision-making through different tools: a right to receive or access meaningful information about the logic, significance and envisaged effects of automated decision-making processes (Articles 13(2)(f), 14(2)(g) and 15(1)(h)); and the right not to be subject to automated decision-making (Article 22), with several safeguards and restraints for the limited cases in which automated decision-making is permitted.
Article 22(1) states as follows: “the data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her”. This right applies almost without exception where sensitive data are involved (Art. 22(4)). For other personal data, the right does not apply in only three cases: where the decision is authorised by Union or Member State law; where it is necessary for a contract; or where it is based on the data subject’s explicit consent (Art. 22(2)).
In the last two of these cases, “the data controller shall implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision” (Art. 22(3)). In addition, recital 71 explains that such suitable safeguards “should include specific information to the data subject and the right to obtain human intervention, to express his or her point of view, to obtain an explanation of the decision reached after such assessment and to challenge the decision”.
The interpretation of the GDPR rules on automated decision-making has generated a lively debate in the legal literature. Several authors have read this web of provisions as establishing a new right to algorithm explanation, while other scholars have taken a more sceptical approach, analysing the limits and constraints of the GDPR provisions and concluding that data subjects’ rights are more limited than expected and that there is no right to explanation. Still other scholars have preferred a contextual interpretation of Articles 13(2)(f), 14(2)(g), 15(1)(h) and 22 of the GDPR, suggesting that the scope of those provisions is not so limited and that they can actually provide individuals with more transparency and accountability. This last view was partially confirmed by the Article 29 Working Party, which released guidelines on profiling and automated decision-making.
The GDPR (and in particular the provisions in Article 22 and recital 71) is often interpreted as referring to only “one” kind of explanation. In practice, however, there is no single form of explanation: each form depends heavily on the context at issue. More importantly, the capability to give a fair and satisfactory explanation also depends on the possibility of showing a causal link between the input data (and in particular certain crucial factors within the input information) and the final decision. This is not always possible: while for traditional data-based decision-making it might be easier to give adequate explanations addressing the causes, the determining factors and the counterfactuals, in more complex AI-based decisions it might be hard to reach this high level of explainability. Indeed, looking at the rapid development of deep learning in different forms of automated decisions (including COVID-19 automated diagnosis based on, e.g., lung images), explaining the specific reasons and factors behind an individual decision might be nearly impossible. An explanation that is neither causal nor contextual is perhaps inadequate to show the data subject possible grounds for challenging the decision, and is therefore unsuitable under Article 22(3) of the GDPR.
These considerations may lead to an insurmountable dichotomy: either we prohibit the most technologically advanced and inscrutable decision-making systems because they cannot comply with the GDPR’s explainability requirements, or we tolerate AI-based decision-making systems that do not formally respect the transparency duties in the GDPR.
In addition, explanations are not only sometimes problematic, but also insufficient to make AI socially and legally “desirable”. In particular, several scholars have reflected upon the “transparency fallacy” of algorithmic explanation, i.e. the risk that even a meaningful explanation might not be effectively received or understood by data subjects (due to its technical nature or to the limited attention, interest or – even temporarily – reduced cognitive capabilities of the data subject).
To overcome the above-mentioned limits of AI explanation, a possible solution might be to look at the broader picture of the GDPR. Article 22(3) and recital 71, when mentioning the possible measures to make automated decisions more accountable, address not only the right to an individual explanation but also several other complementary tools (e.g. the right to contest the decision, the right to human involvement and algorithmic auditing). In particular, several principles and concepts might influence the interpretation of accountability duties in the case of algorithmic decision-making: the fairness principle (Article 5(1)(a)), the lawfulness principle (Article 5(1)(a)), the accuracy principle (Article 5(1)(d)), the risk-based approach (Articles 24, 25 and 35), and the data protection impact assessment model (Article 35).
Seen in the light of these provisions, a justification of automated decisions is not only more feasible but also more useful and desirable than an explanation of the algorithm. Justifying a decision means not merely explaining the logic and reasoning behind it, but also explaining why it is a legally acceptable (correct, lawful and fair) decision, i.e. why the decision complies with the core of the GDPR and is thus based on proportionate and necessary data processing, using pertinent categories of data and relevant profiling mechanisms.
While some scholars have already addressed the need for a justification of automated decision-making (rather than a mere need for explanation), very few authors have tried to clarify what this ADM justification should be and how it should be conducted under the GDPR rules. This article argues that, considering the meaning of “legal justification” as discussed in the previous sections, justifying an algorithmic decision should amount to proving the legality of that decision. By “legality” we mean not just lawfulness, but also accountability, fairness, transparency, accuracy, integrity and necessity, i.e. all the data protection principles in Article 5 of the GDPR.
In recent years, scholars have called for fair algorithms, for accountable algorithms, for transparent algorithmic decisions, or again for lawful and accurate automated decisions that respect data integrity. Justifying ADM means calling for algorithmic decision processes that demonstrably have all the aforementioned characteristics and respect the essence, or core, of data protection. The author argues that the essence of data protection in the GDPR consists in the data protection principles in Article 5. Accordingly, justifying automated decisions means proving that they comply (or adjusting them so that they comply) with the data protection principles in Article 5. Interestingly, the principles of data protection seem to lead to the desirable characteristics of automated decision-making mentioned above.
In conclusion, according to the author, the GDPR already proposes a sustainable environment for desirable ADM systems, one broader than any ambition to have “transparent”, “explainable”, “fair”, “lawful” or “accountable” ADM: we should aspire to just algorithms, i.e. justifiable automated systems that combine all the above-mentioned qualities (fairness, lawfulness, transparency, accountability, etc.).
This might be possible via a practical “justification” process and statement by which the data controller proves in practical terms the legality of an algorithm, i.e. its respect for all the data protection principles (fairness, lawfulness, transparency, purpose limitation, data minimisation, accuracy, storage limitation, integrity, accountability). This justificatory approach might also offer a solution to many existing problems in the AI explanation debate: e.g. the difficulty of “opening” black boxes, the transparency fallacy, and the legal difficulties of enforcing a right to receive individual explanations.
Bryce Goodman and Seth Flaxman, ‘EU Regulations on Algorithmic Decision-Making and a “Right to Explanation”’ arXiv:1606.08813 [cs, stat] <http://arxiv.org/abs/1606.08813> accessed 30 June 2018.
See, e.g., Lilian Edwards and Michael Veale, ‘Slave to the Algorithm? Why a “Right to an Explanation” Is Probably Not the Remedy You Are Looking For’ (2017) 16 Duke Law & Technology Review 18, available at SSRN: <https://ssrn.com/abstract=2972855>.
Sandra Wachter, Brent Mittelstadt and Luciano Floridi, ‘Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation’ (2017) 7(2) International Data Privacy Law 76–99 <https://doi.org/10.1093/idpl/ipx005>.
Andrew D Selbst and Julia Powles, ‘Meaningful Information and the Right to Explanation’ (2017) 7(4) International Data Privacy Law 233–242 <https://doi.org/10.1093/idpl/ipx022>; Gianclaudio Malgieri and Giovanni Comandé, ‘Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation’ (2017) 7(4) International Data Privacy Law 243–265 <https://doi.org/10.1093/idpl/ipx019>. See also Margot E Kaminski, ‘The Right to Explanation, Explained’ (2019) 34 Berkeley Technology Law Journal 189.
Article 29 Working Party, ‘Guidelines on Automated Individual Decision-Making and Profiling for the Purposes of Regulation 2016/679’, WP251rev.01, adopted on 3 October 2017, as last revised and adopted on 6 February 2018; Lilian Edwards and Michael Veale, ‘Slave to the Algorithm?’ (2017).
 Tim Miller, ‘Explanation in Artificial Intelligence: Insights from the Social Sciences’ (2019) 267 Artificial Intelligence 1; Andrew D Selbst and Solon Barocas, ‘The Intuitive Appeal of Explainable Machines’ (2018) 87 Fordham Law Review 1085.
 Clement Henin and Daniel Le Métayer, ‘A Multi-Layered Approach for Interactive Black-Box Explanations’ 38.
See Ronan Hamon, Henrik Junklewitz, Gianclaudio Malgieri, Ignacio Sanchez, Laurent Beslay and Paul De Hert, ‘Impossible Explanations? Beyond Explainable AI in the GDPR from a COVID-19 Use Case Scenario’, forthcoming, on file with the authors.
 Lilian Edwards and Michael Veale, ‘Slave to the Algorithm? Why a “Right to an Explanation” Is Probably Not the Remedy You Are Looking For’ (2017) 16 Duke Law & Technology Review 18; Lilian Edwards and Michael Veale, ‘Enslaving the Algorithm: From a “Right to an Explanation” to a “Right to Better Decisions”?’ (2018) 16 IEEE Security & Privacy 46.
 See, e.g., Gianclaudio Malgieri and Jedrzej Niklas, ‘The Vulnerable Data Subject’ (2020) 37 Computer Law & Security Review.
Sandra Wachter and Brent Mittelstadt, ‘A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI’ (2019) 2 Columbia Business Law Review <https://papers.ssrn.com/abstract=3248829>; Kaminski, ‘Binary Governance’, 12–17.
 Future of Privacy Forum, ‘Unfairness By Algorithm: Distilling the Harms of Automated Decision-Making’ (2017) <https://fpf.org/2017/12/11/unfairness-by-algorithm-distilling-the-harms-of-automated-decision-making/> accessed 8 February 2020; Sainyam Galhotra, Yuriy Brun and Alexandra Meliou, ‘Fairness Testing: Testing Software for Discrimination’, Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering – ESEC/FSE 2017 (ACM Press 2017) <http://dl.acm.org/citation.cfm?doid=3106237.3106277> accessed 31 May 2019; Andrew D Selbst and others, ‘Fairness and Abstraction in Sociotechnical Systems’, Proceedings of the Conference on Fairness, Accountability, and Transparency (ACM 2019) <http://doi.acm.org/10.1145/3287560.3287598> accessed 16 September 2019.
 Joshua Kroll and others, ‘Accountable Algorithms’ (2017) 165 University of Pennsylvania Law Review 633.
 Bruno Lepri and others, ‘Fair, Transparent, and Accountable Algorithmic Decision-Making Processes’ (2018) 31 Philosophy & Technology 611; Bilyana Petkova and Philipp Hacker, ‘Reining in the Big Promise of Big Data: Transparency, Inequality, and New Regulatory Frontiers’  Lecturer and Other Affiliate Scholarship Series <https://digitalcommons.law.yale.edu/ylas/13>; Mireille Hildebrandt, ‘Profile Transparency by Design? Re-Enabling Double Contingency’ <https://works.bepress.com/mireille_hildebrandt/63/> accessed 3 January 2019.
Here ‘essence’ is used in a general sense; we do not refer to the ‘essence’ of the fundamental right to data protection as interpreted by the EU Court of Justice in application of Article 52 of the Charter. For a specific analysis of this topic, see Maja Brkan, ‘The Essence of the Fundamental Rights to Privacy and Data Protection: Finding the Way Through the Maze of the CJEU’s Constitutional Reasoning’ (2019) 20 German Law Journal 864.