28th February 2019, Leuven
From self-driving cars, unmanned aircraft and ships, to connected home appliances and service robots, to personalised healthcare and precision medicine, the emergence of ever more complex autonomous systems has given rise to a number of hard, sensitive and perhaps even existential questions. While AI has existed for several decades already, recent advancements in data science, driven by the widespread availability of data and increasing computational power, have led to the development of cyber-physical spaces where humans and machines interact and evolve together.
Data allows for increased personalisation of the standards against which human behaviour is evaluated, often at the risk of eroding the legal protections that address the power asymmetries between individuals and companies. Furthermore, AI is increasingly being ‘delivered’ through an ever-growing number of consumer and industrial platforms (e.g., transport and healthcare platforms) developed and supported by stakeholders acting at different levels and in different capacities. These platforms go far beyond the simple aggregation of data: they provide access to advanced AI capabilities, linking platform users’ needs with tailor-made solutions. At a time when every aspect of life can be digitised and therefore become subject to ‘optimisation’, the ‘platformisation’ of AI raises multilevel legal and ethical challenges regarding the platform itself, its customers, contributors and end-users.
Platforms can leverage AI and data, which together appear to form the foundations of platforms’ growing dominance. This raises economic, ethical and competition concerns which the law struggles to account for adequately, such as digital dominance and the various forms of harm that result from it. Are existing legal instruments fit for the purpose of regulating this new form of digital governance? The convergence of competition law, data protection and consumer law is sometimes seen as the way to protect weaker parties from harm caused by platforms. While this convergence presents inherent legal challenges, its effectiveness still needs to be assessed. Against this background, there may be a need to search for new legal instruments; the spectrum is wide, ranging from market-based incentivisation to stricter forms of regulatory oversight.
The issues raised by the ‘platformisation’ of AI are exacerbated where platforms could be supported, managed or supervised by states and other governmental actors, such as international organisations. In such multi-stakeholder environments, with distributed agency and cascading delegation, the governance and allocation of responsibility and liability are becoming increasingly difficult. For example, AI platforms could form part of critical infrastructure, with direct implications for the manner in which a state would discharge its obligations and its international responsibility to protect individuals.
The clear delineation of the different actors’ obligations in such platforms is even more challenging in light of the uncertain legal status of data, machine learning models and other types of ‘smart property’. The convergence of public and private interests in these (global) ‘one-stop-shop’ platforms of the future reinforces the question of whether AI should be treated as a ‘global public good’ (such as security, climate change mitigation or global public health) or as part of the global commons. If so, could we seek ‘legal interoperability’ with the regimes applicable to other global commons, such as the polar regions, the atmosphere or outer space? Finally, the search for an adequate governance model has become all the more challenging in light of the co-evolution of humans and machines, which triggers new ethical concerns resonating with the basic question of the content of (international) morality.
The AI Law & Ethics Conference, held in Leuven on 28 February 2019, focuses on these challenges from the assumption that the emerging global AI platforms should be ‘interoperable’ not only at the technical level but even more so at the legal and ethical levels. It aims to explore whether this goal could be attained through a commonly agreed (global) governance model, how to build trust in AI in an increasingly uncertain technological and political environment, and how to align the development and co-evolution of these platforms with the sustainable development goals of human society. By bringing together knowledge and perspectives from academia, the European institutions and business, the conference takes up the challenge and attempts to provide food for future thought and action. See the agenda and the list of confirmed speakers here.
Register here (limited seats available)