The development and deployment of AI technologies represents an excellent opportunity for humanity, and their positive effects are already visible in transport, health, finance, law and other sectors. Autonomous vehicles and predicting the spread of COVID-19 represent only a fraction of AI's potential. AI is already automating intellectual tasks traditionally thought to be performed solely by human lawyers, such as predicting court outcomes, legal drafting, contract review, case synthesis and legal research. For some, it came as a surprise to read a McKinsey report estimating that 23% of the work done by lawyers could already be automated by existing technology.

At the interface between AI and law there is a wide range of effects. While we accept projects broadly located at this interface, the threshold for projects carried out within this group would ideally be that of “legal disruption”. It is this potential to disrupt legal principles, processes and procedures that is at the centre of the group's assessment and investigation. Legal disruption is the filter through which the topics addressed in this group flow: we consider primarily artificial intelligences, or their manifestations, that are capable of fundamentally upending legal assumptions or of systematically distorting the functioning of the regulatory system. To be included as a project in this group, an artificial intelligence or its manifestation must therefore pose structural or systemic challenges to governance. This is an inevitably high threshold, but in order to test whether an artificial intelligence or its effect fits this model, we will of course also address topics that may ultimately fall short of it.
Leading Legal Disruption: Artificial Intelligence and a Toolkit for Lawyers and the Law is designed to confront lawyers with the practical impact that new technologies will have on the delivery of legal services, and to prompt thinking about the legal issues involved, so that they can manage their digital transformation.
By inviting thought leaders from around the world, across disciplines ranging from privacy, contract law and crime to governance and politics, this book goes beyond abstract and general philosophical observations to address issues affecting practitioners. This practical approach has produced a wide range of global perspectives that are refreshing and timely for increasingly global problems.

The objective of the AI LeD research group is to examine and address the legal, regulatory, governance and policy issues arising from artificial intelligence (AI) as a technology, and from the continued use and application of these technologies across areas of human activity. There are obvious overlaps with the faculty's focus on digitization, but the goal of this research group is to go further and address the next generation of transformative legal issues arising from AI and other emerging technologies. Given its focus on legal disruption, the group constantly pursues a moving target: once legal and policy responses overcome or otherwise resolve a challenge posed by artificial intelligence, that problem loses its disruptive effect and falls outside the group's remit (much as AI itself is a moving target, with problems such as chess, vision or translation, once considered “benchmarks of intelligence”, recast as “mere computation” as soon as computers conquer them). What ceases to be contested ceases to interest us. But the perspective offered by legal disruption combines horizon-scanning for the next generation of challenges with a degree of foresight about future issues, for which we will then be able to prepare legal and policy responses. The group's perspective therefore embraces the unknown and the incomplete in order to formulate more robust and resilient regulatory models in response to these technologies.
This is perhaps best illustrated by our collaborative project with the Petrie-Flom Center at Harvard Law School on black box precision medicine (PMAIL). Black box precision medicine is an exciting new frontier in healthcare diagnostics that harnesses the power of big data and AI. In black box medicine, machine learning algorithms examine newly available troves of health data, including genomic sequences, patient clinical care records and diagnostic test results, in order to make predictions and recommendations about care. An algorithm can be a “black box” either because it is based on opaque machine learning techniques, or because the relationships it establishes are too complex to be understood explicitly.

AI LeD also cooperates with the Centre for Advanced Studies in Biomedical Innovation Law (CeBIL), given the close overlap of our research areas. CeBIL focuses on the legal and ethical issues raised by black box precision medicine, as described by its Director, Timo Minssen. There is simply a compelling amount of legal research to be done in this area. AI is likely to be a disruptive force for the law. Lawyers need to start thinking not only about how technology should meet legal requirements, but also about which legal foundations need to be reconsidered in light of what AI reveals about the organization of society, down to individual rights. To the extent that AI offers us an alternative perspective for examining legal principles and processes, it presents a valuable opportunity not only to reinforce the existing legal constellation, but also to rethink and improve existing legal systems.
This opportunity for improvement is crucial here: the anomalies that AI systems flag, and the results they produce, often reflect human and social biases.
Instead of demanding that we correct AI systems to prevent such outcomes in the future, legal scholars should use these rare findings to address the underlying points of friction or controversy.

The interface between AI and law broadly comprises three thematic groups.

Dr Jan De Bruyne obtained a Master's degree in Political Science (2008) and a Master of Laws (2012) from Ghent University. From October 2012, he was an assistant in law and private law at the Faculty of Law and Criminology of Ghent University. He started working at CiTiP in October 2019 as a postdoctoral researcher on the legal aspects of AI and as a senior researcher at the Flemish Knowledge Centre for Data and Society (KDS). Since November 2020, he has worked at CiTiP as a research expert in tort law and AI, and he remains associated with the KDS. He is also involved in several projects at CiTiP. He has published numerous articles in scientific journals and books, and is the editor of “Autonome motorvoertuigen: een multidisciplinair onderzoek naar de maatschappelijke impact” (Vanden Broele, 2020), “Artificiële intelligentie en Maatschappij” (Gompel & Svacina, 2021) and “Artificial Intelligence and Law” (Intersentia, 2021). He is a member of Leuven.AI as well as various other academic institutions (e.g. ICAV, CVGR, …). Since 2019, he has lectured on electronic contracting in the LLM IT & IP Law, and he is a regular speaker at, and organizer of, conferences and seminars.
He successfully defended his doctorate in September 2018, on a topic dealing with the liability of third-party certifiers; his doctoral dissertation was published by Kluwer Law International. During his research, he became interested in liability for damage caused by AI systems. From October 2018 to October 2020, he was a postdoctoral researcher at the Faculty of Law and Criminology of Ghent University, working on robots and tort law. He has been a Visiting Fellow at the TC Beirne School of Law (Brisbane, Queensland), a Van Calker Fellow at the Swiss Institute of Comparative Law, and a Visiting Scholar at the Institute of European and Comparative Law at the University of Oxford and the Centre for European Legal Studies at the University of Cambridge.