In the European regulatory framework, concerns about data processing on the web have produced a comprehensive set of rules governing the automated processing of personal data. These laws and their amendments have been superseded by the General Data Protection Regulation (GDPR), in effect since May 2018, which specifically addresses decisions made through the automated (algorithmic) processing of personal data concerning the individuals to whom such data pertain.
The regulatory framework governing the use of algorithms can be distilled into three principles: transparency, non-exclusivity, and non-discrimination.
- Principle of transparency and comprehensibility:
within the European and Italian regulatory framework for the protection of personal data, every individual has the right to be aware of the existence of automated decision-making processes that involve them. This right applies to decisions made by both private and public entities. Mere awareness of the existence of an algorithm, however, is not sufficient: decisions must also be comprehensible. Complexity arises when predictive algorithms based on machine learning produce inference criteria that are not understandable even to their own programmers.
- Principle of non-exclusivity and human involvement:
according to Article 22 of the GDPR, "the data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her." European law adheres to the "human-in-the-loop" (HITL) principle, requiring human involvement in the decision-making process to control, validate, or reject the automated decision. There is skepticism about the practical effectiveness of the non-exclusivity principle, which in theory prohibits decisions based solely on automated data processing. Its exceptions are broad, and once algorithms are integrated into human decision-making processes they tend to prevail, for practical reasons rather than because of their scientific value or technical reliability. Both in the United States and in Europe, the use of evidence-based risk assessment tools has been strongly promoted as a means of enhancing efficiency and reliability, since such tools lend an air of objectivity, scientific rigor, and credibility.
- Principle of algorithmic non-discrimination:
this principle is implicit in Regulation (EU) 2016/679, which emphasizes the importance of avoiding discriminatory effects on individuals based on race, ethnic origin, political opinions, religion, personal beliefs, trade union membership, genetic or health status, or sexual orientation during automated data processing, especially in profiling and predictive algorithms. The principle highlights the possibility that an algorithm, although considered objectively reliable, may carry a risk of discrimination when it is trained on initially discriminatory data.
The dialogue between law and technology is still limited. Simoncini proposes a doctrine of "constitutional precaution," requiring the anticipatory protection of constitutional goods such as freedom and the rule of law before technological applications are produced: intervening after algorithms have been created and disseminated is too late, so protection must be proactive, by design. It is also necessary to intervene in the training of scientists and technologists, integrating constitutional principles into their education and conveying fundamental values such as the protection of personal data and the comprehensibility of algorithms. Sustainability and constitutional precaution are crucial requirements for AI development, ensuring the long-term preservation of the basic conditions of human life and nature, with awareness of the challenges that may accompany the evolution of this technology.
On Friday, December 8, 2023, after months of intense negotiations and three days of "marathon" talks, the European Parliament and the Council reached a political agreement on the European Union Artificial Intelligence Act ("EU AI Act"). Commission President Ursula von der Leyen claimed "global primacy," positioning the EU as a pioneer in AI regulation and the "first continent to define clear rules for the use of AI." With this landmark legislation, the EU aims to create a comprehensive legal framework for regulating artificial intelligence systems across the Union, ensuring that such systems are "safe" and "respect fundamental rights and values of the EU" while promoting investment and innovation in AI in Europe. Once the consolidated text is finalized in the coming weeks, most provisions of the EU AI Act will come into effect two years later.
The Act aims to regulate Artificial Intelligence (AI) according to its capacity to cause harm to society, adopting a "risk-based" approach: the higher the risk, the stricter the rules. This legislation, the first of its kind globally, could set a standard for AI regulation in other jurisdictions, much as the GDPR has done, promoting the European approach to technological regulation worldwide. Key elements of the Act, compared with the Commission's initial proposal, include:
1. Rules for general-purpose AI models with high impact and high-risk AI systems, with a risk-based classification.
2. A revised governance with enforcement powers at the EU level.
3. Expanded prohibitions, tempered by narrow exceptions allowing law enforcement authorities to use remote biometric identification in public spaces, subject to safeguards.
4. Enhanced protection of rights, with an obligation for deployers of high-risk AI systems to conduct a fundamental rights impact assessment before putting an AI system into use.
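The risk-based classification described above can be made concrete with a small sketch. The following Python fragment is purely illustrative and not part of the Act: the tier names follow the commonly cited four-level scheme (unacceptable, high, limited, minimal), and the mapping of tiers to headline obligations is a simplified assumption for demonstration purposes.

```python
from enum import Enum


class RiskTier(Enum):
    """Illustrative risk tiers under the EU AI Act's risk-based approach."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # strict obligations before deployment
    LIMITED = "limited"            # transparency duties only
    MINIMAL = "minimal"            # largely unregulated


# Hypothetical, simplified mapping of tiers to headline obligations;
# the actual legal text is far more detailed and nuanced.
OBLIGATIONS: dict[RiskTier, list[str]] = {
    RiskTier.UNACCEPTABLE: ["banned outright"],
    RiskTier.HIGH: [
        "conformity assessment",
        "fundamental rights impact assessment",
        "human oversight",
    ],
    RiskTier.LIMITED: ["transparency disclosures"],
    RiskTier.MINIMAL: [],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligations attached to a given risk tier."""
    return OBLIGATIONS[tier]
```

The point of the sketch is the ordering principle itself: as risk increases, the list of obligations grows, from none at the minimal tier to an outright ban at the unacceptable tier.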
Regarding definitions and scope, the Act aligns with the approach proposed by the OECD, excludes areas outside the scope of EU law, and does not affect national competencies in security and defense. High-risk classifications for AI systems and prohibited practices are defined to ensure effective protection. Moreover, exemptions are introduced for military uses and for research and innovation purposes.
The Act establishes a governance system that includes an AI Office within the Commission, responsible for overseeing the most advanced AI models. An independent scientific panel and a board of member-state representatives will coordinate and advise on AI regulation, and proportionate sanctions for violations are provided, with more moderate caps for SMEs.
Additional measures are outlined to ensure transparency and the protection of fundamental rights, including an assessment of the impact on fundamental rights before high-risk AI systems are introduced. The Act also promotes innovation through regulatory sandboxes for the development and testing of new AI systems. It will come into effect two years after its adoption, once the formal finalization and adoption process, including legal and linguistic review, is complete.