AI experts warned MEPs of gaps in the EU Commission's proposed Artificial Intelligence Act. While the intention to preserve the fundamental rights and freedoms of EU citizens is most welcome, the proposal reflects our limited understanding of artificial intelligence to date, and more must be done to close loopholes.


The European Commission has proposed legislation that would regulate the use of certain Artificial Intelligence ('AI') systems and services in order to preserve the fundamental rights and freedoms of EU citizens, such as privacy. However, experts warned that the proposal reflects our limited understanding of AI risks to date, and that many gaps remain which could allow the legislation to be circumvented.

The definition of AI sparked the most debate during the public hearing. Professor Stuart Russell of the University of California, Berkeley described it as unfinished: it needs to cover both the deployed object and the pre-deployment generator that creates it. Professor Russell went on to say that AI researchers find the exemption of general-purpose AI from the Act puzzling; the fact that a system has no specific or identifiable purpose does not mean it should fall outside the AI Act altogether.

Excluding general-purpose AI systems shifts liability away from the non-EU firms that own the AI systems on which applications are built and onto the European companies that use them in specific applications. Max Tegmark of the Future of Life Institute used the example of Airbus buying an engine from a non-EU company without knowing how it works or being able to look inside it: if the plane then crashes because of an engine malfunction, Airbus alone would be liable.

Max Tegmark went on to set out the difference between regulating based on purpose and regulating based on outcome, since most harm caused by AI systems is inadvertent. The experts agreed that general-purpose systems must be included so that, when unintentional harm occurs, the makers of the system are responsible. Andrea Renda of the Centre for European Policy Studies said there needs to be a distinction between those who develop AI and those who deploy it, making it possible to apportion liability.

When asked whether the Act is sufficiently future-proof, Max Tegmark said that it was not. For the experts, a broad definition that is not too prescriptive was key. Catelijne Muller of ALLAI explained that an exhaustive list of techniques risks confusion, circumvention and loopholes, and that we should look to the characteristics of AI systems rather than their intended use.

Andrea Renda stressed the importance of governance over attempting to capture every high-risk application in the Act itself: only a system agile and flexible enough to handle risk classification, guidance and algorithmic monitoring can offer this. He added that the EU is currently too bureaucratic to be that system.

However, the experts did not hesitate to remind MEPs of the importance of AI regulation. The Artificial Intelligence Act has the power to create a level playing field between competitors, enhance the quality of AI and raise the bar globally (the so-called 'Brussels effect'). It mitigates reputational risk and financial claims, and incentivises AI safety and sound engineering. This leaves us with the conclusion that, although more can be done, this regulation will not stifle innovation; it fosters it.

For a deep dive into the content of the AI Act, see our previous article here.