Algorithm and individual freedom

What once seemed confined to the realms of mythology or science fiction now materializes in autonomous vehicles, advanced diagnostic systems, and decision-making algorithms. Artificial Intelligence (AI) acts as an "umbrella term" covering the various computational techniques that enhance machines' ability to perform intelligent tasks. Central to this process is a vast flow of data that, for some observers, gives these systems a degree of autonomy and "thinking" capacity, raising concerns about how such informational and computational streams should be governed. The relationship between AI and social and legal systems is now unfolding, with all its intricate implications and adaptive challenges.

The topic currently attracts significant interest because of its impact on many facets of social and economic life. It raises fundamental ethical and anthropological questions about the legitimacy of delegating to machines decisions that affect individual and collective freedoms in fields as diverse as law, medicine, and aerospace. Legal interest in AI evolves in response to societal forces, as AI becomes an essential element of people's lives and social relationships.

The need to make sense of a rapidly changing reality and of human experience is reflected in the European Parliament's Resolution of 16 February 2017 on Civil Law Rules on Robotics. According to the resolution, a robot's autonomy can be defined as its ability to make and implement decisions in the external world independently of external control or influence. The level of autonomy is purely technological and depends on how sophisticated the robot's interaction with its environment is designed to be. Where a robot can make autonomous decisions, traditional norms are deemed insufficient to establish liability for the damage it causes, because they do not allow the party responsible for compensation to be identified or the repair to be demanded of it.
Machines and AI systems can cause physical, moral, and economic harm through behaviors that are unpredictable and not fully programmable in advance. The issues extend beyond driverless cars to the military use of AI and the spread of autonomous weapons, which challenge international humanitarian law by threatening principles such as the distinction between military and civilian targets and respect for human dignity. Technology, until recently considered a mere tool for humans, has suddenly become a "subject" in which human freedom is vested, attributing a kind of "agency" to the pervasive presence and action of sensory, digital, and artificial networks in the world. The question arises whether legal personality can be assigned to these systems, given their growing autonomy and their ability to make choices with moral implications.

The COMPAS case, in which judicial and administrative decisions relied on an algorithm, is emblematic. COMPAS is an algorithm produced by Equivant, a private company; it assesses an individual's danger to society and risk of recidivism on the basis of statistical data, the defendant's answers to a questionnaire, and other variables that Equivant protects as intellectual property. In 2013, American citizen Eric Loomis was sentenced to six years in prison and five years of probation for failing to stop when ordered to by a police officer, after being identified as the driver of a stolen vehicle. The La Crosse County Court justified this substantial penalty by noting that COMPAS had classified him as "high-risk for the community." Loomis appealed the sentence, and the case was referred to the Wisconsin Supreme Court; he claimed a violation of the constitutional due process clause because, owing to the private nature of the software used in sentencing, he could not obtain access to the source code of the risk assessment.
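The actual COMPAS model is a trade secret, but a recidivism risk score of the general kind described above can be sketched as a simple logistic model over questionnaire answers and criminal-history variables. Everything in the sketch below (feature names, weights, the "high risk" cutoff) is invented for illustration and has no relation to Equivant's real model:

```python
import math

# Hypothetical feature weights -- the real COMPAS model is proprietary.
WEIGHTS = {
    "prior_arrests": 0.30,
    "age_at_first_arrest": -0.05,
    "failed_to_appear": 0.80,
}
BIAS = -2.0
HIGH_RISK_THRESHOLD = 0.7  # illustrative cutoff, not COMPAS's

def risk_score(answers: dict) -> float:
    """Logistic score in [0, 1] from questionnaire-style inputs."""
    z = BIAS + sum(WEIGHTS[k] * answers.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def risk_band(score: float) -> str:
    """Map a continuous score to the categorical label a court would see."""
    return "high" if score >= HIGH_RISK_THRESHOLD else "low/medium"

# A fictional defendant's inputs:
defendant = {"prior_arrests": 6, "age_at_first_arrest": 17, "failed_to_appear": 1}
s = risk_score(defendant)
```

The point of the sketch is the due-process problem the Loomis case raises: the weights and threshold fully determine the label, yet in a proprietary system none of them is inspectable by the defendant.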
Loomis's argument was rejected, and the Supreme Court upheld the original sentence, asserting that the risk scores produced by the software are weighed alongside other independent factors, and that the software serves merely as a tool for courts, which may choose whether and which of its data to accept. The case makes evident that complete knowledge or understanding of an algorithm that decides on individual freedom is not always possible. For this reason, such tools must not operate as autonomous decision-makers but must always remain subject to human reasoning. The social asymmetry of AI's impact is also clear: these systems can perpetuate discriminatory patterns, especially against minority groups. Ethical design of algorithmic models, together with ethical training for programmers and developers, becomes crucial to prevent indirect discrimination. Today an increasing number of decisions affecting human liberties are made by algorithms, a fact that raises many questions about the transparency of such tools, the legal and ethical framework for algorithmic decision-making, and the social and cognitive impacts of algorithmic automation.
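The discrimination risk mentioned above can be made concrete with a toy audit: under a single decision threshold, the false-positive rate (people who did not reoffend but were flagged as high risk) can differ sharply between groups. The records, scores, and threshold below are entirely synthetic and chosen only to illustrate the mechanism:

```python
def false_positive_rate(records, threshold):
    """Share of non-reoffenders whose score wrongly flags them as high risk."""
    negatives = [r for r in records if not r["reoffended"]]
    flagged = [r for r in negatives if r["score"] >= threshold]
    return len(flagged) / len(negatives) if negatives else 0.0

# Synthetic data: same outcome base rates, different score distributions.
group_a = [
    {"score": 0.8, "reoffended": False},
    {"score": 0.6, "reoffended": False},
    {"score": 0.9, "reoffended": True},
    {"score": 0.3, "reoffended": False},
]
group_b = [
    {"score": 0.4, "reoffended": False},
    {"score": 0.2, "reoffended": False},
    {"score": 0.7, "reoffended": True},
    {"score": 0.1, "reoffended": False},
]

fpr_a = false_positive_rate(group_a, threshold=0.5)  # 2 of 3 non-reoffenders flagged
fpr_b = false_positive_rate(group_b, threshold=0.5)  # 0 of 3 non-reoffenders flagged
```

An audit of this shape is one way the "ethical programming" the text calls for can be operationalized: disparities in error rates are measurable even when the model itself is opaque, as long as its outputs and outcomes can be observed.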

  • #Technology
Sources:

Simoncini, A. (2019). L’algoritmo incostituzionale: intelligenza artificiale e il futuro delle libertà. BioLaw Journal - Rivista Di BioDiritto, 15(1), 63–89. https://doi.org/10.15168/2284-4503-352

Cassese, S. (2018, May 18). Il diritto nello specchio di Sofocle. Il Corriere della Sera. https://www.corriere.it/cultura/18_maggio_18/cartabia-violante-il-mulino-saggio-cassese-c1288514-5ab0-11e8-be88-f6b7fbf45ecc.shtml (last accessed 4/02/2019).

European Parliament. (2017). Resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), P8_TA(2017)0051.

D'Aloia, A. (2019). Il diritto verso "il mondo nuovo". Le sfide dell'Intelligenza Artificiale. BioLaw Journal - Rivista di BioDiritto, 1, 3–31.