Frank Pasquale (Professor of Law at Brooklyn Law School and world-renowned legal scholar, author of The Black Box Society and New Laws of Robotics) and Gianclaudio Malgieri (Associate Professor at the EDHEC Augmented Law Institute) have presented their co-authored paper on the notion of “AI justification”. Their proposed regulatory model goes beyond the traditional “ex post” model (notice, consent, contest) and moves toward an ex ante justificatory model.

Building both on the connection between Articles 5 and 22 of the GDPR and on the new regulatory framework set out in the proposed EU AI Act, the two professors have put forward a “licensing model” for high-risk AI. To prevent circumvention and mitigate the risks of troublesome AI, the authors propose a model of “AI unlawfulness until proven otherwise”: under this proposal, a regulator would grant pre-authorisation (following a self-justification notice) to companies that want to develop and commercialise high-risk AI systems.

The EU AI Act seems to move in this more “ex ante” direction, but with some limits and challenges (e.g., a limited list of covered risks, and limited sanctions for failure to justify high-risk systems).