The second AI4People’s Forum Meeting 2019 took place on July 10th in Brussels, where the initial draft was presented, closing the first phase of the drafting of the “Report on Good AI Governance: Principles, Priorities, and Models of Smart Coordination”. Members of the AI4People’s Forum, the European Parliament, and the European Commission discussed the Report and contributed to the next phase of drafting.
Following the AI4People’s Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations, the activities in 2019 focused on the “Governance of AI”. The Recommendations of that document and its 20 Action Points (AP) are the starting point upon which the 2019 activities build; most of the issues on how to assess, develop, and support a Good AI Society entail complex matters of governance.
Accordingly, the first draft of the new AI4People’s Report on Good AI Governance will be divided into three parts.
Part one will focus on the principles of AI. These principles were first illustrated in our 2018 paper and have since been adopted, also but not only, by the AI HLEG’s Ethics Guidelines for Trustworthy AI from April 2019. In stressing the link between our 2018 and 2019 work in the background section of this document, the aim is to highlight a substantial convergence on some ethical issues of AI and their corresponding guidelines, as well as on a substantial part of today’s law that is already applicable to AI, e.g., the tortious liability regimes of all EU Member States.
The second part of the document will focus on the priorities of today’s AI governance. The ethical principles and recommendations of our 2018 paper are prioritized according to that which can be deemed good, right, or lawful and, moreover, that which can be done now (whereas a second kind of priority concerns that which we reasonably think is good, but which will take time to implement). The document thus proposes three different types of priority. These regard (i) forms of engagement; (ii) no-regrets actions; and (iii) coordination mechanisms.
Part three of the document will focus on models of AI governance and corresponding forms of legal regulation. The complexity of today’s legal and moral issues in AI regulation calls for specific forms of governance that are neither bottom-up nor top-down. In the EU legal framework, this middle-out layer of governance, i.e. between the top-down and bottom-up approaches, is mostly associated with forms of co-regulation, as defined by Recital 44 of the 2010 AVMS Directive and Article 5(2) of the GDPR. The document intends to show why neither co-regulative models of AI governance nor forms of self-regulation and their variants, e.g., “monitored self-regulation”, are good enough to tackle the normative challenges of AI. Rather, the bar is set between models of self-regulation and co-regulation, since this approach considers both the existence and the limits of current regulatory frameworks, as examined above with the principles of AI.