Artificial Intelligence and Its Legal Challenges: The European Union AI Act
The main goal of this workshop is to bridge the knowledge gap between AI experts and legislators/policymakers using a bottom-up approach. We aim to provide an interactive platform where legislative experts can engage directly with AI experts, free from institutional pressures and agenda constraints. Machine Learning is a novel and complex technology; as a result, conceptual divisions and a lack of clarity persist even among specialists in the area and have inevitably spilled over into the non-specialist world of policymaking. This is the first time that an IEEE conference of this kind has included a workshop specifically dedicated to the legal and societal aspects of novel robotics applications. In August 2024, the EU Artificial Intelligence Act, the first-ever comprehensive piece of AI legislation, entered into force in the European Union. The future implications of this legislation will affect key requirements for transferring AI technology from research labs to industrial applications, such as risk assessment and liability legislation, both of which are strongly linked to technological robustness.
However, this regulatory effort has exposed significant uncertainties, inflated expectations, and confusion about what this technology truly is and what it can or cannot do. Famously, the EU found it difficult to settle on a definition of what needs to be regulated, that is, Machine Learning techniques; as a result, the definition of ML changed several times across successive papers. One of the most confusing issues has been the degree of autonomy these systems may possess, often exaggerated into unrealistic beliefs, such as the expectation that fully automated vehicles like those from Tesla will dominate the roads in the very near future. This has fuelled the argument that an accelerated legal framework is needed. Further confusion arises from overestimations of reinforcement learning, as well as from conflating system behaviour during the training stage with behaviour during the inference stage. The frequently used and often misinterpreted term “black box” has contributed to the perception that AI is an unpredictable technology, more prone to evading human oversight than any technology before it, challenging well-established robotics engineering problems and solutions. These hyped perceptions in the legal realm have led the EU to attempt to regulate a technology that, according to this view, defies the traditional understanding of “risk” and, consequently, of “machine fault.” As a result, controversial proposals have emerged, such as granting legal personhood to AI systems and overhauling the EU Product Liability Directive. While the EU AI Act represents a more conservative legislative approach than the debates leading up to it, it has not resolved or clarified these issues. Instead, it has preserved them, meaning that future legislation, much like the forthcoming Liability Directive, may rest on the same assumptions and perpetuate these concerns. Hence the necessity of workshops such as this one.
Organizer:
Daniela Ionescu – Senior Researcher in AI robotics, policy, and legislation, Extreme Robotics Lab, University of Birmingham, UK.
Program 17 April
14:30 – 14:35 – Introduction
14:35 – 14:55 – Marta Giuca – Criminal Liability for Damages Caused by AI Systems from the Perspective of the European Union AI Act (AIA).
14:55 – 15:15 – Riccardo Pivetti – The Role of AI in Helping the Judiciary in Public Safety, Legal Protection, and the Criminal Process.
15:15 – 15:35 – Andrea Bertolini – The European Union AI Act and Intelligent Robotics: The Risk Management of High-Risk Robotics.
15:35 – 15:55 – Daniela Ionescu – Controversies about “Learning” and “Autonomy” in Machine Learning: The Case of Nuclear Decommissioning and Dismantling EV Batteries.
16:00 – 16:30 – Coffee break
16:30 – 18:00 – Round-table discussion and Q&A with the public
Registration for Workshops (required – deadline: 13 April)
Speakers’ Biographies
Dr. Andrea Bertolini is an Associate Professor of Private Law at the Dirpolis Institute, Scuola Superiore Sant’Anna, and an adjunct professor of private law at the University of Pisa. He is also the director of the Centre of Excellence on the Regulation of Robotics and AI (EURA), funded by the European Commission. His research covers a wide range of topics, including private law (contracts, torts, law of obligations), the regulation of robotics and AI, and technoethics, with a focus on alternative liability models, product safety regulation, certification, insurance, and risk management, as well as human–machine interaction, user manipulation, and deception. Dr. Bertolini regularly consults with national, European, and international policymakers in his areas of expertise.
Dr. Marta Giuca has extensive experience, both academic and in the field, in the area of Artificial Intelligence and criminal liability. Dr. Giuca has conducted a series of in-depth research projects on AI and human criminal liability at research labs including the University of Catania, CY Cergy Paris Université, the CESDIP laboratory, and the BCNMedtech laboratory at Pompeu Fabra University (Barcelona). Marta was also a visiting researcher at the prestigious Max Planck Institute for the Study of Crime, Security, and Law in Freiburg, and at the University Paris-1 Panthéon-Sorbonne. Currently, Marta is a trainee judge at the Court of Catania, Italy, specializing in criminal law and advanced technologies (AI).
Dr. Daniela Ionescu is a senior research fellow at the Extreme Robotics Lab at the University of Birmingham, UK. Dr. Ionescu has conducted extensive research on the regulation of machine learning systems applied to robotics in extreme environments, particularly in nuclear decommissioning, underwater activities, and the dismantling of electric car batteries. Her research has been part of several large consortium/hub grants, including the UKRI-funded National Centre for Nuclear Robotics, the EU Horizon 2020 RoMaNS project, and ongoing EU-funded initiatives. Dr. Ionescu is also a member of the OECD’s Expert Group on the Application of Robotics and Remote Systems in the Nuclear Back-End.
Dr. Riccardo Pivetti is the President of the First Criminal Section of the Court of Catania, Italy. Drawing on his vast experience in criminal law and cybernetics, he advises engineers on methodologies for building AI systems that can support the police in investigating crimes, as well as judges in evaluating evidence in criminal proceedings. In this context, Dr. Pivetti exemplifies how high-risk AI systems, as defined by the European Union’s AI Act, could impact the field of criminal law.