On Wednesday 21 April 2021, the European Commission released its proposal for a regulation on AI within the European Union. It is accompanied by annexes and by a Communication on Fostering a European approach to Artificial Intelligence.
The proposal takes a horizontal, risk-based regulatory approach to AI systems that is intended to be comprehensive and future-proof. To achieve proportionality, regulatory burdens are imposed only when an AI system poses significant risks to the health, safety or fundamental rights of a person.
As such, the definition of ‘high-risk’ is crucial, as AI systems in that category will have to comply with a set of horizontal mandatory requirements for trustworthy AI and undergo conformity assessment procedures before they can be placed on the Union market. The proposal also prohibits certain AI practices and lays down harmonised transparency rules for AI systems that interact with natural persons.
An AI system is defined in the proposal as: “Software that is developed with one or more techniques and approaches in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments they interact with.” The said annex lists machine learning approaches, logic- and knowledge-based approaches and statistical approaches.
An AI system is considered high-risk if:
- it is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex II (see additional information for an abbreviated note on this); and
- the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to placing that product on the market or putting it into service pursuant to the Union harmonisation legislation above.
AI systems intended to be used in the following areas are also considered high-risk:
- biometric identification and categorisation of natural persons (real-time and post);
- management and operation of critical infrastructure;
- education and vocational training;
- employment, workers management and access to self-employment;
- access to and enjoyment of essential private services and public services and benefits;
- law enforcement;
- migration, asylum and border control management;
- administration of justice and democratic processes.
Importantly, the proposal also lists AI practices whose use is prohibited: deploying subliminal techniques, exploiting vulnerabilities of a specific group of persons, social scoring and the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for law enforcement purposes. The latter may nevertheless be used under strictly defined conditions.
Providers of high-risk AI systems face several requirements under the proposed regulation. They need to set up and run a risk management system and ensure that the data they use in developing such systems are of high quality. They also need to draw up technical documentation for a high-risk system and keep records of its operation (logs). Providers must design AI systems in a way that ensures transparency about them and allows for human oversight. The AI systems must also be resilient against errors, faults and attempts to exploit the system’s vulnerabilities. Finally, providers must have a quality management system in place and must ensure that the AI system undergoes the relevant conformity assessment procedure before it is placed on the market.
The proposal also sets out obligations for users of AI systems, in particular the obligation to report any potential risk associated with their use and to keep the automatically generated logs to the extent possible.
The Commission proposes that each member state establish a notifying authority, responsible for setting up and carrying out the procedures necessary for the assessment, designation and notification of conformity assessment bodies, as well as for their monitoring. Each member state will also be expected to designate or establish a competent authority to ensure compliance with the regulation. From among these competent authorities, member states must choose a national supervisory authority, which will act as both notifying authority and market surveillance authority.
Finally, the proposal establishes the European Artificial Intelligence Board which will provide advice and assistance to the Commission in order to support its cooperation with the national supervisory authorities.