Artificial Intelligence and its legal challenges. The European Union AI Act.
The main goal of this workshop is to bridge the knowledge gap between AI experts and legislators/policymakers using a bottom-up approach. We aim to provide an interactive platform where legislative experts can engage directly with AI experts, free from institutional pressures and agenda constraints. Machine Learning is a novel and complex technology; as a result, even among specialists in this area there are still conceptual divisions and a lack of clarity, which have inevitably spilled over into the non-specialist world of policymaking. This is the first time that an IEEE conference of this kind has decided to include a workshop specifically dedicated to the legal and societal aspects of novel robotics applications. In August 2024, the European Union adopted the first-ever piece of legislation on AI: the EU Artificial Intelligence Act. The future implications of this legislation will affect key aspects essential for transferring AI technology from research labs to industrial applications, such as risk assessments and liability legislation, both of which are strongly linked to technological robustness.
However, this regulatory effort has exposed significant uncertainties, inflated expectations, and confusion about what this technology truly is and what it can or cannot do. Famously, the EU found it difficult to arrive at a definition of what needs to be regulated, that is, the Machine Learning techniques; as a result, ML definitions changed several times across various papers. One of the most confusing issues has been the degree of autonomy these systems may possess, which is often exaggerated into unrealistic beliefs, such as the expectation that fully automated vehicles like those from Tesla will dominate the roads in the very near future.
This has fuelled the argument that an accelerated legal framework is needed.
Further confusion arises from overestimations of reinforcement learning, as well as from conflating system behavior during the training stage with behavior during the inference stage. The frequently used and often misinterpreted term “black box” has contributed to the perception that AI is an unpredictable technology, more prone to evading human oversight than any technology before it, challenging well-established robotics engineering problems and solutions.
These hyped perceptions in the legal realm have led to the EU attempting to regulate a technology that, according to this view, defies the traditional understanding of “risk” and, consequently, of “machine fault.” As a result, controversial proposals have emerged, such as granting legal personhood to AI systems and reshuffling the EU Product Liability Directive.
While the EU AI Act represents a more conservative legislative approach than the debates leading up to it, it has not resolved or clarified these issues. Instead, it has maintained them, meaning that future legislation, much like the forthcoming Liability Directive, may rest on the same assumptions and perpetuate these concerns. Hence the necessity of workshops such as this.
Organizer:
Daniela Ionescu – Senior Researcher on AI Robotics, policy and legislation, Extreme Robotics Lab, University of Birmingham, UK.