The Shifting Sands of Regulatory Sandboxes for AI

Regulatory sandboxes – a buzzword or a valuable tool for regulators, businesses and society? This blog post aims to shed light on the concept, its nature and specifics, as well as its role in the future development of both technology and legislation.

Yet another buzzword?

June 2019 was packed with AI-related events. First of all, the High-Level Expert Group on Artificial Intelligence published its long-anticipated Policy and Investment Recommendations for Trustworthy AI. In addition, the Industry 2030 High-Level Industrial Roundtable published its final report, “A vision for the European Industry until 2030”.

One of the notions that generated the most hype was that of the ‘regulatory sandbox’. Both the European Commission and the Parliament recognised regulatory sandboxing as a highly desirable tool for coping with the regulatory challenges presented by new technologies and, most prominently, by AI.

But what exactly is a regulatory sandbox, one may wonder. The term derives from a related but distinct concept in computer science.

Sandbox vs Regulatory Sandbox

In layman’s terms, a sandbox in computer science is an isolated environment for testing programs and/or preventing malicious ones from damaging a computer system or critical system resources. In fact, most of us interact with such sandboxes daily simply by using web browsers. Every time a browser loads a page, that page is opened in a sandbox that limits what the website can and cannot do and what resources it can use, e.g. in terms of memory and storage. The most popular browsers are, as pieces of software, themselves sandboxes, creating a ‘sandbox within a sandbox’ model that improves a computer system’s overall security. The isolated environment can also be a dedicated space on a hard disk or, more often, a virtual machine. One of a sandbox’s key applications is monitoring the system and how it reacts to certain programs; this is precisely its use as a testing tool.
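To make the computer-science notion concrete, here is a minimal sketch (assuming a POSIX system) of running an untrusted snippet in a child process whose CPU time and memory are capped with the standard-library `resource` module. This is a simplified illustration of resource isolation, not a real security sandbox of the kind browsers implement:

```python
import resource
import subprocess
import sys

def run_sandboxed(code: str, cpu_seconds: int = 2,
                  memory_bytes: int = 512 * 1024 * 1024) -> subprocess.CompletedProcess:
    """Run a Python snippet in a child process with hard CPU-time and
    address-space limits -- a toy version of a sandbox's resource caps."""
    def limit_resources():
        # Applied in the child just before it starts executing.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (memory_bytes, memory_bytes))

    return subprocess.run(
        [sys.executable, "-c", code],
        preexec_fn=limit_resources,  # POSIX only
        capture_output=True,
        text=True,
        timeout=cpu_seconds + 5,     # wall-clock safety net
    )

# A well-behaved program runs to completion inside the limits...
ok = run_sandboxed("print('hello from the sandbox')")
print(ok.returncode, ok.stdout.strip())

# ...while a memory-hungry one is stopped by the cap, not by the host crashing.
greedy = run_sandboxed("x = 'a' * (2 ** 31)")  # tries to allocate ~2 GB
print(greedy.returncode != 0)
```

The point of the sketch is the asymmetry it demonstrates: misbehaviour is contained and observed from outside, which is exactly the monitoring-as-testing role described above.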

By contrast, a regulatory sandbox is a process and a tool for regulation. It is often described as a “laboratory environment”, but its key function, i.e. testing innovations against the existing regulatory framework, is achieved through a process involving the participating business entities and the regulator.

Like its namesake, it aims to mitigate risk. Yet the nature of that risk is substantively different from the risk in a computer system. This calls for a different and, more importantly, adaptive approach to constructing individual sandboxes, even in relation to the different participants in each of them. Last but not least, a computer-science sandbox exists in the real world, even if in the form of a virtual machine running on a remote server. The regulatory sandbox, by contrast, is a legal fiction and, as such, is subject to the rules of legal logic.

Playing in the sand

As already established, the regulatory sandbox is not a new concept. Novel ways of regulation have been discussed and experimented with in relation to FinTech since 2015, when the UK became the first country to announce the development of a regulatory sandbox in the context of Project Innovate. Since then, quite a few jurisdictions have attempted to duplicate and evolve the experiment. The promising results have inspired national and international regulators to look beyond FinTech and consider adopting regulatory sandboxes in other areas, such as data protection.

In the case of AI, we do not yet know what specialised regulatory sandboxes will look like. We must therefore examine the generalised model that already exists in the area of FinTech.

The process establishes a safe space for its participants, who fall into three categories: the regulating authority, usually an executive body such as the FCA; the participating business entities; and civil society.

What is the objective? Risk management of disruptive technologies is one of the main reasons for a sandbox’s existence. Another is the dynamic learning process within the sandbox, which allows the regulator to stay one step ahead, to perceive legislative challenges more accurately and to react to them more quickly. The business entities shorten their products’ time to market and gain stronger guarantees that those products, at least at that stage, comply with the existing legal requirements. As for consumers, they get access to cutting-edge technologies while their rights are adequately protected.

What are the rules of play? Every sandbox has its own rules, depending on the type of innovation it is going to test, the guarantees that are needed and the leeway the regulator can and is willing to provide to businesses. The regulator establishes an entry test with pre-defined criteria, as well as the capacity of the sandbox, the testing parameters and conditions, the evaluation methodology and the exit criteria. It is no exaggeration to say that the whole process is a complex balancing exercise between (commercial) interests and the protection of rights.
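The elements a regulator defines – entry criteria, capacity, testing conditions and exit criteria – can be sketched as a simple data structure. Everything below is purely illustrative; the names, criteria and companies are hypothetical and not drawn from any real sandbox scheme:

```python
from dataclasses import dataclass, field

@dataclass
class SandboxRules:
    """Toy model of a regulatory sandbox cohort's parameters (hypothetical)."""
    capacity: int                       # maximum number of participants per cohort
    entry_criteria: list                # pre-defined admission criteria
    testing_conditions: dict            # e.g. duration, consumer safeguards
    exit_criteria: list                 # conditions for leaving the sandbox
    admitted: list = field(default_factory=list)

    def admit(self, applicant: str, passed_criteria: set) -> bool:
        # An applicant enters only if every entry criterion is met
        # and the cohort still has room.
        if set(self.entry_criteria) <= passed_criteria and len(self.admitted) < self.capacity:
            self.admitted.append(applicant)
            return True
        return False

cohort = SandboxRules(
    capacity=2,
    entry_criteria=["genuine innovation", "consumer benefit"],
    testing_conditions={"duration": "6 months", "safeguards": "restricted user group"},
    exit_criteria=["final report", "compliance review"],
)
print(cohort.admit("FinBot Ltd", {"genuine innovation", "consumer benefit"}))  # True
print(cohort.admit("HypeCoin", {"genuine innovation"}))                        # False: criteria not met
```

Even this toy version makes the balancing exercise visible: tightening the entry criteria protects rights but shrinks the pool of innovations tested, while raising capacity does the opposite.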

Does it really work?

According to the FCA’s Regulatory Sandbox Lessons Learned Report, the tool has great potential, but it also creates some challenges. Based on the experience acquired in the field of FinTech, and keeping in mind the EU’s ambitions for a regulatory sandbox on AI, several issues should be highlighted and taken into consideration by regulators in their future work on this matter. First, the term ‘AI’ is very broad, so there is a clear need for careful differentiation if the sandbox is to be functional. Second, the regulatory sandbox requires transparency, which must be balanced against classified and commercially sensitive information and trade secrets. Third, the limited number of participants may be insufficient to satisfy the market’s needs and could raise competition concerns. Furthermore, it is not clear how a national regulator can fully participate in a regulatory sandbox when the area of regulation falls partly or entirely within the EU’s competences. Last but not least, one of AI’s key characteristics is its capacity to learn, meaning that an AI technology leaving the sandbox labelled as compliant can change rapidly and undermine the value of the sandbox process.

In my opinion, regulatory sandboxes are just a small part of a new kind of anticipatory regulation, which needs to combine a variety of methods and tools in order to respond to the ever-changing nature of a world built on data.

This article gives the views of the author(s), and does not represent the position of CiTiP, nor of the University of Leuven.
ABOUT THE AUTHOR — Katerina Yordanova @katevyordanova

Katerina is a research associate at CiTiP, providing expert analysis in the area of AI and human rights in the digital environment. She has legal degrees from KU Leuven, Sofia University and Cambridge University. Her main interests include ICT law, business and human rights and security.
