The European Law Blog recently published an article by Gianclaudio Malgieri and Marcello Ienca, titled “The EU regulates AI but forgets to protect our mind”. The article is reproduced below.
After the publication of the European Commission’s proposal for a Regulation on Artificial Intelligence (AI Act, hereinafter: AIA) in April 2021, several commentators raised concerns or doubts about the draft. Notably, on 21 June the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) released a Joint Opinion on the AIA, suggesting many changes to the European Commission’s proposal.
We consider the AIA one of the most advanced and comprehensive attempts to regulate AI in the world. However, we concur with the EDPB and the EDPS that the AIA, in its current version, presents several shortcomings. We identify three main areas where amendment is needed: (i) the regulation of emotion recognition, (ii) the regulation of biometric classification systems and (iii) the protection against commercial manipulation. Partly building upon the thoughtful and comprehensive joint opinion of the European supervisory authorities, we will try to make a case for greater regulatory focus on the nexus between artificial intelligence and the human mind.
The proposed AI Act: a risk-based approach
Before addressing this issue in greater detail, let us first clarify the scope of the AIA. The proposal aims to define different areas of intervention for AI systems according to their level of risk: a) applications that are prohibited because they pose unacceptable risks to fundamental rights and freedoms; b) high-risk applications, i.e. applications that are not prohibited but are subject to specific conditions to manage the risks; c) limited-risk and other negligible-risk applications.
The list of prohibited AI applications includes manipulative online practices that produce physical or psychological harm to individuals or exploit their vulnerability on the basis of age or disability; social scoring producing disproportionate or de-contextualised detrimental effects; and biometric identification systems used by law enforcement authorities in public spaces (where their use is not strictly necessary or where the risk of detrimental effects is too high). This is the first time European regulators have attempted to define a boundary that should not be crossed when deploying AI within society.
Unlike prohibited AI systems, AI systems classified as “high-risk” are not forbidden by default but are subject to several compliance duties. These duties include a risk management plan, conformity certification, a data management plan, human oversight, etc. The list of high-risk AI systems in the AIA includes facial recognition; AI used in critical infrastructures, in educational, employment or emergency contexts, and in asylum and border contexts; and AI used for social welfare, credit scoring, law enforcement or judicial purposes. The Commission is empowered to update this list on the basis of the severity and probability of the impact of present and future AI systems on fundamental rights.
Finally, a third category of AI systems is considered to be at “limited risk”. This category includes morally questionable AI applications, such as algorithms producing deepfakes (highly realistic fake videos or photos) as well as emotion recognition and biometric categorization systems. Using these systems outside the law enforcement domain would not automatically imply any specific compliance duty, but only relatively vague transparency obligations: i.e., a mere notification to consumers/citizens that an AI system is in place. It should be highlighted that the risk level of an AI system appears to be a conjunction of the type of AI system, its domain of application and its human target. This implies that if a limited-risk AI system is used for practices that fall under the unacceptable-risk list, it would be prohibited. For example, if AI is used to detect emotions among children and thereby manipulate them, it would be prohibited. Analogously, if such limited-risk AI applications are used in sensitive contexts that fall under the high-risk list, they would be subject to strict compliance duties. This is the case, for example, if AI systems are used to profile suspected criminals, workers or students.
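To make this conjunction concrete, here is a minimal, purely illustrative sketch (in Python) of the tiering logic as we read it; the category labels, domains and uses are our own assumed examples, not terms defined in the AIA.

```python
# Purely illustrative sketch of the AIA's risk-tiering logic as we read it.
# All labels (system types, domains, uses) are our own assumed examples,
# not terms defined in the AIA.

PROHIBITED_USES = {"manipulation_of_children", "exploitative_manipulation"}
HIGH_RISK_DOMAINS = {"law_enforcement", "employment", "education", "border_control"}
LIMITED_RISK_SYSTEMS = {"emotion_recognition", "biometric_categorization", "deepfake_generation"}

def risk_tier(system_type: str, domain: str, intended_use: str) -> str:
    """Return a simplified risk tier from the conjunction of system type, domain and use."""
    if intended_use in PROHIBITED_USES:
        return "prohibited"        # unacceptable-risk practices are banned outright
    if domain in HIGH_RISK_DOMAINS:
        return "high-risk"         # strict compliance duties apply
    if system_type in LIMITED_RISK_SYSTEMS:
        return "limited-risk"      # only transparency/notification duties
    return "minimal-risk"

# Emotion recognition used to manipulate children -> prohibited;
# the same system used on workers -> high-risk; otherwise -> limited-risk.
print(risk_tier("emotion_recognition", "advertising", "manipulation_of_children"))  # prohibited
print(risk_tier("emotion_recognition", "employment", "worker_assessment"))          # high-risk
print(risk_tier("emotion_recognition", "advertising", "newsfeed_personalization"))  # limited-risk
```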
The limits of the AI Act on emotion recognition and biometric categorization
We acknowledge that the AI Act outlines a robust and innovative framework for AI governance. However, we argue that it fails to classify, by default, practices that attempt to reveal information about a person’s mind as high-risk. In our view, this insufficiently strict regulation of AI applications for mental data processing opens room for risk scenarios in which the only safeguard for individuals against having their mental information (e.g., emotions) automatically processed would be a mere compliance duty, such as notifying them, without giving them any possibility to opt out.
Several ethically tainted uses of AI would benefit from this loophole. For example, human rights activists recently revealed that Uyghurs, a Turkic ethnic group native to the Xinjiang Uyghur Autonomous Region in Northwest China, were forcibly subjected to emotion recognition software experiments. Further, methodologically ambiguous studies have claimed to infer, through facial recognition AI, sensitive characteristics of individuals relating to their mental domain, such as their sexual orientation, intelligence, or criminal proclivities. These practices are currently not considered high-risk per se. Neither are other “mind-mining” practices such as social media analyses of emotions aimed at customizing the newsfeed of individual users and covertly influencing their behaviour through microtargeted advertising (except in the very rare cases where they produce physical or psychological harm or exploit vulnerabilities related to age or disability).
To date, the only safeguard proposed in the AI Act against these practices is a mandatory notification to individuals. This creates an additional problem because, as the EDPB argues, the AI Act does not define who these concerned “individuals” would be. Furthermore, mere transparency notices might be ineffective in general for inattentive or uninterested data subjects. Moreover, in this particular case, those transparency duties might prove too limited and generic. This becomes clear when compared with the transparency duties in other EU legal fields. For example, the EU General Data Protection Regulation (GDPR) requires that, whenever automated decision-making is used, data subjects should not merely be notified but should receive meaningful information about the logic, the significance and the envisaged effects of the algorithm. Moreover, as we noted in a recent paper, automated emotion analysis might already be considered a ‘high-risk’ practice under the GDPR, since it would involve many of the risk factors that the EDPB asks to be considered when determining the level of risk and the need for a Data Protection Impact Assessment (DPIA): “innovative technologies”, sensitive information and vulnerable subjects.
The asymmetry between the strict regulation of facial recognition (or of AI used for assessing workers, candidates, students, etc.) and the lenient “legitimation” of emotion recognition seems even more puzzling if we consider that, in principle, emotion analysis or biometric categorization might be considered high-risk under the definition of risk in Article 7 of the proposal. That Article affirms that the Commission should include in the high-risk list any AI system posing “a risk of adverse impact on fundamental rights” that is equivalent to the risk of harm posed by the high-risk AI systems already referred to in the AIA. Among the factors to consider when determining the level of risk, Article 7 mentions “the extent to which the use of an AI system … has given rise to significant concerns in relation to the materialisation of adverse impact, … the potential extent of such harm or such adverse impact, in particular in terms of its intensity and its ability to affect a plurality of persons; … the extent to which potentially harmed or adversely impacted persons are in a vulnerable position in relation to the user of an AI system, in particular due to an imbalance of power, knowledge, economic or social circumstances, or age”.
Conclusions: call for improvement
To avoid interpretative ambiguities, and partly developing the EDPB’s call for changes to the AIA, we argue that the AIA should explicitly include in the high-risk list those AI systems that rely on mental information, such as emotion recognition systems (in any form) and digital nudgers, even though they are not currently included in that list.
We contend that the automated analysis of emotions is already considered a “high-risk” practice in the EU regulatory landscape, especially in data protection law, since it would fall under at least three of the high-risk criteria mentioned by the EDPB (namely, using innovative technologies, processing sensitive information and involving vulnerable data subjects). Therefore, considering these systems high-risk would help harmonise the European regulatory landscape and better protect European citizens.
Gianclaudio Malgieri & Marcello Ienca