The EU Artificial Intelligence Act
The European Commission Proposes New AI Regulations April 21st 2021
Since the EU Commission has released a proposal for a legal framework on AI, I thought it would be worth a read. This article can be considered my personal notes on the regulation, and not an official statement from any organisation I am part of.
One can hear several broad statements about this proposal in different media.
I think it is great that this new proposal is being discussed in the media, yet I wanted to explore the proposed regulation myself. Additionally, I will try to explain a few basic concepts mentioned and not explained in the proposal itself relating to the EU.
This article will give you a more detailed understanding of the outline of the proposal and where you could start.
I will not be able to explore the proposal in its entirety within this one article, so you can view this as the start of a series in which I will continue to explore the Articles of the EU Artificial Intelligence Act. This is a 107-page policy document and requires more than one article to unravel.
It is relatively easy to find the new proposal, released on the 21st of April 2021, and I appreciate reading other opinions on this text, so if you have any thoughts please share them as a comment on this article.
This is the page where you can download the proposal:
A direct link to the proposal if you do not feel like searching online:
The following are broad notes and the numbering of the sections does not correspond to the sections in the proposal.
1. Harmonised rules on AI and amending legislation
The document begins with a subtitle that may allude to the overall purpose of the document. It speaks to harmonised rules and amendments of legislative acts.
So what are harmonised rules?
“An objective of the European Union to achieve uniformity in laws of member states is to facilitate free trade and protect citizens. Harmonisation is a process of ascertaining the admitted limits of international unification but does not necessarily amount to a vision of total uniformity.”
That is, the EU has a wide range of members and there is an aim to achieve some unity without total uniformity — that is, the rules are open to some extent to national interpretation. This aligns well with larger previous data regulations such as the General Data Protection Regulation (GDPR), where member states (and associated members, such as Norway) have some wiggle room to make their own decisions.
2. Explanatory Memorandum?
This is a section explaining why the proposal was made and how it was prepared.
“The purpose of the explanatory memorandum is to explain the reasons for, and the context of, the Commission’s proposal drawing on the different stages of the preparatory process.” Tool #38 in the regulation toolbox (yes, there is such a thing).
This section should not exceed 15 pages, although in complex cases more can be justified. In this case it is 16 pages, so I guess this is a slightly complex case?
2.1. Reasons and objectives
The first reason given is the potential ‘societal benefits’ AI can bring. The second is the potential ‘competitive advantages’.
“By improving prediction, optimising operations and resource allocation, and personalising service delivery, the use of artificial intelligence can support socially and environmentally beneficial outcomes and provide key competitive advantages to companies and the European economy.” (p. 1)
While at the same time it is stressed: “…the same elements and techniques that power the socio-economic benefits of AI can also bring about new risks or negative consequences for individuals or the society.”
It is stated that EU strives for a ‘balanced approach’ by aiming for technological leadership and that technology works according to: “…Union values, fundamental rights and principles”.
Then, they refer to previous statements by the EU, such as the commitment by President von der Leyen in her political guidelines for the 2019–2024 Commission. They reference the White Paper on AI — A European approach:
This previous text outlined a twin objective:
- Promoting uptake of AI
- Addressing risks with the use of AI.
It also set out policy options for the uptake of AI. They refer back to this twin objective and note that their focus here is on the second objective, regarding risk:
“This proposal aims to implement the second objective for the development of an ecosystem of trust by proposing a legal framework for trustworthy AI.” (p. 1)
They underline that rules should be human-centric. This was one of the previous conclusions from the High-Level Expert Group on Artificial Intelligence (AI HLEG), in the document Ethics Guidelines for Trustworthy Artificial Intelligence (p. 10):
“The common foundation that unites these rights can be understood as rooted in respect for human dignity — thereby reflecting what we describe as a “human-centric approach” in which the human being enjoys a unique and inalienable moral status of primacy in the civil, political, economic and social fields.”
Further driving home this point the proposal for AI regulation argues:
“AI should be a tool for people and be a force for good in society with the ultimate aim of increasing human well-being.” (p. 1)
May I remind you, we are still on the first page.
A lot to unravel in this document.
This document responds to explicit requests from the: “…European Parliament (EP) and European Council […] to ensure a well-functioning internal market for artificial intelligence (‘AI systems’)”
So, how does this work?
The EU legislative process rests on a specific relationship between three institutions: the European Commission, the European Parliament and the Council of the European Union.
Remember, this proposal document is from the European Commission — the entity that proposes legislation.
The text goes on to describe the previous processes in the Council and Parliament relevant to this proposal, especially previous resolutions on AI and draft reports:
- Ethical aspects on AI, 2020/2012(INL).
- Civil liability regime for AI, 2020/2014(INL).
- Intellectual property rights, 2020/2015(INI).
- (draft report) criminal law, 2020/2016(INI).
- (draft report) education, culture and the audiovisual sector, 2020/2017(INI).
Legislative initiative procedure (INL) as described on the European Parliament website: “The Commission has the legislative initiative. However, under the Treaty of Maastricht enhanced by the Lisbon Treaty, the European Parliament has a right of legislative initiative that allows it to ask the Commission to submit a proposal.”
Own-initiative procedure (INI) as described by EU monitor:
“By means of the own-initiative report the European Parliament requests the European Commission to put forward a legislative proposal on a certain issue. An own-initiative report is drawn up according to Parliament’s own procedures. It is not regarded as one of the formal decision-making procedures of the European Union, but it is seen as a significant precursor to legislative procedures being initiated.”
Furthermore INI as described by the European Parliament: “In the areas where the treaties give the European Parliament the right of initiative, its committees may draw up a report on a subject within its remit and present a motion for a resolution to Parliament. They must request authorisation from the Conference of Presidents before drawing up a report.”
2.2. Specific objectives
The proposal has stated specific objectives (p.3):
- “ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values;
- ensure legal certainty to facilitate investment and innovation in AI;
- enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems;
- facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.”
They claim to take a ‘horizontal’ regulatory approach. That means, the approach is not sector-specific.
In doing so, they argue that they have made a framework with flexible mechanisms, so that it can be dynamically adapted. Still, it has a risk methodology to define ‘high-risk’ AI systems.
The governance system is at Member State level and builds on existing structures, but it is important to note two aspects:
- Establishment of a cooperation mechanism at Union level with the establishment of a European Artificial Intelligence Board.
- “Additional measures are also proposed to support innovation, in particular through AI regulatory sandboxes and other measures to reduce the regulatory burden and to support Small and Medium-Sized Enterprises (‘SMEs’) and start-ups.”
2.3. Consistency with existing policy provisions
It is stated this proposal complements GDPR:
“…with a set of harmonised rules applicable to the design, development and use of certain high-risk AI systems and restrictions on certain uses of remote biometric identification systems.”
This proposal is also made to minimise algorithmic discrimination, especially with regard to the quality of the data sets.
With this comes obligations to ensure that throughout the AI systems’ lifecycle there will be:
- Risk management documentation.
- Human oversight.
A proposal for a Machinery Regulation was adopted the same day and the New Legislative Framework (NLF) legislation (2008) (e.g. machinery, medical devices, toys) is important to consider for AI products as well.
Two very specific comments are made here regarding regulated credit institutions and certain large-scale IT systems.
Firstly: “As regards AI systems provided or used by regulated credit institutions, the authorities responsible for the supervision of the Union’s financial services legislation should be designated as competent authorities for supervising the requirements in this proposal to ensure a coherent enforcement of the obligations under this proposal and the Union’s financial services legislation where AI systems are to some extent implicitly regulated in relation to the internal governance system of credit institutions.“
Secondly: “In relation to AI systems that are components of large-scale IT systems in the Area of Freedom, Security and Justice managed by the European Union Agency for the Operational Management of Large-Scale IT Systems (eu-LISA), the proposal will not apply to those AI systems that have been placed on the market or put into service before one year has elapsed from the date of application of this Regulation, unless the replacement or amendment of those legal acts leads to a significant change in the design or intended purpose of the AI system or AI systems concerned.”
2.4. Consistency with other Union policies
Several existing policy initiatives are mentioned; I list a few:
- Shaping Europe’s digital future.
- 2030 Digital Compass: the European way for the Digital Decade.
- Data Governance Act.
- The Open Data Directive.
- The EU strategy for data.
As such, one can easily see that the broader policy landscape is more complicated than the 107-page Artificial Intelligence Act alone.
However, central to this proposal is that it seems to be a follow-up, so to speak, on the EU’s large international presence as an actor in global data policy:
“The proposal also strengthens significantly the Union’s role to help shape global norms and standards and promote trustworthy AI that is consistent with Union values and interests.”
2.5. Legal basis, subsidiarity and proportionality
The first legal basis for the proposal is Article 114 of the Treaty on the Functioning of the European Union (TFEU) that relates to the functioning of the internal market.
“This proposal constitutes a core part of the EU digital single market strategy.”
The stated reason for this is to avoid (i) fragmentation and (ii) diminishment of legal certainty. That is why they are harmonising legislation.
“…the proposal defines common mandatory requirements applicable to the design and development of certain AI systems before they are placed on the market that will be further operationalised through harmonised technical standards.”
It also addresses what to do when certain AI systems are operational in the market.
This proposal also contains specific rules on the protection of individuals in regard to the processing of personal data: “…notably restrictions of the use of AI systems for ‘real-time’ remote biometric identification in publicly accessible spaces for the purpose of law enforcement…”
The EU specifically does not want a patchwork of laws that is impossible to navigate at the local level. They make this argument to address the EU’s organising principle of subsidiarity.
What is subsidiarity? It is:
“…the principle that a central authority should have a subsidiary function, performing only those tasks which cannot be performed at a more local level.”
If AI can only be applied in a set way within certain Member States, it could be problematic, and there could be larger issues that smaller states cannot easily deal with on their own. They argue that:
“Only common action at Union level can also protect the Union’s digital sovereignty and leverage its tools and regulatory powers to shape global rules and standards.”
It is important to stress that this proposal:
“…imposes regulatory burdens only when an AI system is likely to pose high risks to fundamental rights and safety.”
As such, this definition becomes important. Although it may be harder to comply with regulations, the EU argues that there are proportionate economic and reputational benefits for operators.
2.6. Evaluations, stakeholder consultations and impact assessments
The following was done to consult stakeholders:
- An online public consultation ran from the 19th of February 2020 to the 14th of June 2020 (alongside the publication of the White Paper on Artificial Intelligence). This received 1215 contributions (more details about these can be found on p. 7–8).
Several points were made by the stakeholders. One was the fear of overregulation; another was the importance of clear definitions.
“Stakeholders also highlighted that besides the clarification of the term of AI, it is important to define ‘risk’, ‘high-risk’, ‘low-risk’, ‘remote biometric identification’ and ‘harm’”
According to the proposal, a risk-based framework was considered a better option than ‘blanket regulation’. Risks and threats should be:
“…based on a sector-by-sector and case-by-case approach.”
2.7. Use of expertise
The proposal refers back to the High-Level Expert Group on AI (HLEG), set up in 2018 with 52 experts to advise the Commission on the implementation of the Commission’s Strategy on Artificial Intelligence. In 2019 the Commission supported the HLEG’s Ethics Guidelines for Trustworthy AI (taking account of 500 submissions from stakeholders).
- The Assessment List for Trustworthy Artificial Intelligence (ALTAI) made those requirements operational in a piloting process with over 350 organisations.
- The AI Alliance was formed as a platform for approximately 4000 stakeholders.
After this, different policy options were considered:
- “Option 1: EU legislative instrument setting up a voluntary labelling scheme;
- Option 2: a sectoral, “ad-hoc” approach;
- Option 3: Horizontal EU legislative instrument following a proportionate risk-based approach;
- Option 3+: Horizontal EU legislative instrument following a proportionate risk-based approach + codes of conduct for non-high-risk AI systems;
- Option 4: Horizontal EU legislative instrument establishing mandatory requirements for all AI systems, irrespective of the risk they pose.”
The preferred option was, as you might have guessed, 3+.
This was done to keep compliance costs to a minimum (p. 10). It was suggested that regulatory sandboxes and other measures might help SMEs with compliance. They even estimate costs for those who provide AI systems:
“Compliance with these requirements would imply costs amounting to approximately EUR 6000 to EUR 7000 for the supply of an average high-risk AI system of around EUR 170000 by 2025.”
For AI users there would also be a cost in ensuring human oversight:
“Those have been estimated at approximately EUR 5000 to EUR 8000 per year. Verification costs could amount to another EUR 3000 to EUR 7500 for suppliers of high-risk AI.”
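To put these estimates in rough perspective, here is a quick back-of-the-envelope calculation (my own arithmetic, not figures or code from the proposal):

```python
# Rough perspective on the proposal's cost estimates (my own arithmetic,
# not part of the proposal itself). All amounts in EUR.
system_value = 170_000               # average high-risk AI system by 2025
compliance = (6_000, 7_000)          # supplier compliance cost range
oversight_per_year = (5_000, 8_000)  # estimated yearly human-oversight cost
verification = (3_000, 7_500)        # possible verification cost for suppliers

low = compliance[0] / system_value   # lower bound of the cost share
high = compliance[1] / system_value  # upper bound of the cost share
print(f"Compliance cost share of system value: {low:.1%} to {high:.1%}")
# prints: Compliance cost share of system value: 3.5% to 4.1%
```

In other words, the Commission estimates the direct compliance cost at roughly 3.5 to 4 per cent of the value of an average high-risk system, before oversight and verification costs are added on top.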
These impacts are explained in detail within: “…Annex 3 of the Impact assessment supporting this proposal.”
It is important to stress:
“Businesses or public authorities that develop or use any AI applications not classified as high risk would only have minimal obligations of information.”
We will get back to the definition of high-risk later, but this is a clear limitation to the scope of the proposal.
For those interested, there is a list relating to how this proposal addresses fundamental rights (p. 11), with various articles listed. Notably, for someone like me who is interested in environmental protection, it mentions: “The right to a high level of environmental protection and the improvement of the quality of the environment (Article 37)”. However, this is quickly narrowed down to the ‘health and safety of people’, so it is challenging to see ecological concerns included to any great extent here.
2.8. Budgetary implications
The proposal explains that this will have an impact and requires commitment from member states:
“Member States will have to designate supervisory authorities in charge of implementing the legislative requirements. Their supervisory function could build on existing arrangements, for example regarding conformity assessment bodies or market surveillance, but would require sufficient technological expertise and human and financial resources. Depending on the pre-existing structure in each Member State, this could amount to 1 to 25 Full Time Equivalents per Member State.”
This is detailed in a ‘financial statement’ linked to this proposal.
2.9. Monitoring, evaluation and reporting
The proposal lays out the different roles in ensuring responsible implementation.
“The Commission will be in charge of monitoring the effects of the proposal. It will establish a system for registering stand-alone high-risk AI applications in a public EU-wide database.”
As such, high-risk applications will be registered. AI providers need to provide ‘meaningful’ information about their system and conformity assessment on those systems.
AI providers are responsible for reporting ‘serious incidents or malfunctioning’ that constitute a breach of fundamental rights obligations.
The framework will be evaluated in a report by the Commission five years after it becomes applicable:
“The Commission will publish a report evaluating and reviewing the proposed AI framework five years following the date on which it becomes applicable.” (p. 12)
2.10. Structure of proposal
The overall proposal is structured (with shortened titles) in the following way:
- scope and definitions;
- prohibited AI practices;
- high-risk AI-systems;
- transparency obligations for certain AI-systems;
- measures in support of innovation;
- governance and implementation (three titles included here);
- codes of conduct;
- final provisions.
If you are looking for a summary of all the sections, section 5.2 on pages 12–16 could be a good place to start.
As could be expected, they start with definitions. Interestingly, they also map out the ‘AI value chain’, covering both public and private operators.
They then differentiate between AI systems that create: “…(i) an unacceptable risk, (ii) a high risk, and (iii) low or minimal risk.” The prohibited practices cover, for example, manipulative subliminal techniques. They mention it is still relevant to consider existing data protection, consumer protection and digital service legislation.
“The proposal also prohibits AI-based social scoring for general purposes done by public authorities. Finally, the use of ‘real time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement is also prohibited unless certain limited exceptions apply.” (p. 13)
It then defines rules for AI systems that create a high risk to the health and safety or fundamental rights of natural persons.
“The classification of an AI system as high-risk is based on the intended purpose of the AI system, in line with existing product safety legislation.”
As such, classification does not depend only on the function of the system, but also on its intended purpose.
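To make this tiered logic concrete, here is a minimal sketch in Python (my own illustration; the categories and example purposes are hypothetical simplifications, not the regulation’s actual taxonomy) of how classification could hinge on intended purpose rather than technical function alone:

```python
# Illustrative sketch of the proposal's risk tiers (my own simplified
# modelling, not an official taxonomy from the regulation).
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited practice"
    HIGH = "subject to mandatory requirements"
    LOW_OR_MINIMAL = "minimal obligations of information"

def classify(technique: str, intended_purpose: str) -> RiskTier:
    # Hypothetical purpose labels: classification depends on the intended
    # purpose of the system, not only on its technical function.
    if intended_purpose in {"social scoring by public authorities",
                            "subliminal manipulation"}:
        return RiskTier.UNACCEPTABLE
    if intended_purpose in {"safety component of a regulated product",
                            "use case listed in Annex III"}:
        return RiskTier.HIGH
    return RiskTier.LOW_OR_MINIMAL

# The same technique can land in different tiers depending on purpose:
print(classify("face recognition", "use case listed in Annex III"))  # RiskTier.HIGH
print(classify("face recognition", "photo-album organisation"))      # RiskTier.LOW_OR_MINIMAL
```

The point of the sketch is simply that the same underlying technique, here face recognition, would fall into different tiers depending on the purpose for which it is placed on the market.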
The Title III on high-risk AI is important and deserves attention.
Chapter one of Title III sets classification rules and defines two main categories of high-risk AI systems:
- “AI systems intended to be used as safety component of products that are subject to third party ex-ante conformity assessment;
- other stand-alone AI systems with mainly fundamental rights implications that are explicitly listed in Annex III.”
This Annex III contains a list of a number of AI systems that have already materialised or are likely to materialise in the future.
Chapter two of Title III, relates to legal requirements for data and data governance for high-risk AI systems. It is argued the minimum requirements are compatible with frameworks adopted by EU’s international trade partners.
Chapter three of Title III, places a clear set of ‘horizontal obligations’ on providers of high-risk AI systems.
Chapter four of Title III, sets a ‘framework for notified bodies’ in conformity assessment procedures.
Chapter five of Title III: “…explains in detail the conformity assessment procedures to be followed for each type of high-risk AI system.”
As a side note considering conformity assessment procedures (as described by Seleon regarding medical devices):
“The conformity assessment procedure is a proof that the general safety and performance requirements are fulfilled.”
They describe ex-ante (before) and ex-post (after) mechanisms.
One important thing to note is that the EU recognises that expertise for AI auditing is lacking in the current environment:
“A comprehensive ex-ante conformity assessment through internal checks, combined with a strong ex-post enforcement, could be an effective and reasonable solution for those systems, given the early phase of the regulatory intervention and the fact the AI sector is very innovative and expertise for auditing is only now being accumulated.”
Another aspect that is worth thinking closely about is:
“New ex ante re-assessments of the conformity will be needed in case of substantial modifications to the AI systems.”
This could be one challenging point as ‘substantial modifications’ certainly could be up for discussion.
Title IV, relating to transparency obligations, applies to systems that:
- (i) interact with humans,
- (ii) are used to detect emotions or determine association with (social) categories based on biometric data, or
- (iii) generate or manipulate content (‘deep fakes’).
Importantly: “When persons interact with an AI system or their emotions or characteristics are recognised through automated means, people must be informed of that circumstance.”
You cannot analyse a person without that person knowing they are being analysed.
Title V, considers how this framework can support innovation. This is done through the encouragement of national competent authorities to set up regulatory sandboxes to test innovative technologies:
“AI regulatory sandboxes establish a controlled environment to test innovative technologies for a limited time on the basis of a testing plan agreed with the competent authorities.”
This section also considers other measures.
Title VI, sets up governance systems at Union and national level.
“…the proposal establishes a European Artificial Intelligence Board (the ‘Board’), composed of representatives from the Member States and the Commission.”
Who will be in this board? It should be interesting to find out!
The role of the Board is to facilitate harmonised implementation of this regulation by contributing to the cooperation of the national supervisory authorities and providing expertise to the Commission.
Title VII, regards monitoring through an EU-wide database for stand-alone high-risk AI systems where providers will have to register their systems before putting them out on the market.
Title VIII, is about monitoring and reporting obligations, particularly the role of market surveillance authorities:
“Ex-post enforcement should ensure that once the AI system has been put on the market, public authorities have the powers and resources to intervene in case AI systems generate unexpected risks, which warrant rapid action.”
Title IX, is about the creation of codes of conduct. These include the possibility for ‘voluntary commitments’:
“Those codes may also include voluntary commitments related, for example, to environmental sustainability, accessibility for persons with disability, stakeholders’ participation in the design and development of AI systems, and diversity of development teams.”
Title X, includes measures to ensure effective implementation of the regulation through effective, proportionate, and dissuasive penalties for infringements of provisions.
Title XI, sets out rules for the exercise of delegation and implementing powers.
Title XII, outlines the need to update Annex III regularly and evaluate the regulation. It also details a transitional period for the initial date of the applicability of the regulation for smooth implementation. As described in Article 85.2: “This Regulation shall apply from [24 months following the entering into force of the Regulation]”.