On the 31st of May 2016, the European Parliament's Committee on Legal Affairs issued its draft report with recommendations to the Commission on Civil Law Rules on Robotics.
The report is a comprehensive list of recommendations on what the final civil law rules on robotics should contain, ranging from the mundane, such as standards for manufacturing quality, to the more science-fictional, citing Isaac Asimov's famous Laws of Robotics. The report has recently been debated across various EU committees, and one area that has been hotly contested is that of the algorithm.
For those “not in the know”, an algorithm in computer science is a set of detailed instructions which produces a predictable end-state from a known beginning. Algorithms are increasingly being used to automate processes that used to be carried out by humans. They are potentially fairer and more objective than we are, but they are not automatically so.
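That definition can be made concrete with a classic example. The sketch below uses Euclid's algorithm for the greatest common divisor, a textbook illustration (not one drawn from the report) of a detailed set of instructions that always reaches the same predictable end-state from a known beginning:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a set of detailed instructions that reaches a
    predictable end-state (the greatest common divisor) from a known
    beginning (the pair of starting numbers)."""
    while b != 0:
        a, b = b, a % b  # replace the pair until the remainder is zero
    return a

print(gcd(48, 18))  # always prints 6, however many times it is run
```

Given the same inputs, the procedure always halts with the same answer, which is precisely what distinguishes an algorithm from an ad-hoc human judgement.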
On the whole, the successes of these fundamental computing methods, despite their ubiquity, often go unnoticed by the general public. A few of the more novel examples of algorithmic success may, however, spring to mind. I am of course referring to the Deep Blue computer programme, which defeated the world's best chess player in 1997; the computer Watson, which competed on the American game show Jeopardy! and beat two former champions in 2011; and the AlphaGo programme which, in March 2016 and over a five-game series, beat one of the world's top Go players four games to one (the success of this latter case being made all the more impressive when you consider that there are more possible positions in a game of Go than there are atoms in the known universe).
Algorithmic failures are, however, more readily reported, and include such famous examples as: Microsoft's Twitter bot Tay, designed to learn from users, which had to be shut down after it became racist and sexist within 24 hours of going live; the Google photo application and Nikon camera software, which mislabelled people as “gorillas” and as “blinking” respectively because of their ethnicity; and the algorithm behind Facebook's news feed, which was exposed as not being neutral (as Mark Zuckerberg had contended) but rather biased to “maximize the amount of engagement you have with the site and keep the site ad-friendly”.
When one considers then the all-pervasive nature of these algorithms, the scope which they have for such great successes and yet also their ability to make such fundamental failures, one can understand the increasing debate surrounding the need for accountability and transparency of algorithms.
Is it enough, for example, to say that the mislabelling in the above-mentioned Google and Nikon cases was the result of an algorithmic data problem, or should we rather say that the mislabelling occurred because the algorithm learnt from data chosen by the white male engineers who created it?
The question of accountability is complicated further when we consider just a few of the problems which typify this niche area: so-called “black-box” algorithms, which are so complex that one cannot ascertain why they have made the decisions they have; algorithms that learn and adapt by themselves, which may make it harder to find a “human in the loop” to hold accountable; the fact that numerous people are often responsible for a single algorithm; and the ethical and legal implications of how one holds an algorithm accountable, and to what standard.
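The “black-box” problem is worth making concrete. The toy sketch below (my own illustration, with invented weights, not an example from the report) shows why even a very simple learned decision rule resists explanation: the decision can always be computed, but the “reason” for it is nothing more than arithmetic over numbers produced by training.

```python
# Hypothetical weights produced by some training process; in real systems
# there may be millions of such values spread across many layers.
WEIGHTS = [0.83, -1.42, 0.05, 2.17]

def approve(features: list[float]) -> bool:
    """Return a yes/no decision from a weighted sum of input features.

    The logic is fully inspectable, yet asking *why* an applicant was
    refused yields only a sum of weighted numbers, not a human-readable
    rule such as "income too low".
    """
    score = sum(w * x for w, x in zip(WEIGHTS, features))
    return score > 0  # the threshold, like the weights, came from training

print(approve([1.0, 0.2, 3.0, 0.1]))  # a decision, but no stated reason
```

Scaled up to modern machine-learning systems, this is the accountability gap the committees are wrestling with: the output is determinate, but no individual weight corresponds to an articulable justification.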
In the context of accountability, it is also important to ask whether regulation is the best way to ensure transparency and, if so, how one is to broach this topic without stifling innovation. Should it be necessary, for example, to regulate research?
These are just some of the debates currently being held throughout several EU committees in relation to the Parliament's report on Civil Law Rules on Robotics.
The report is 22 pages long, and forms the first step in the EU law-making process. It is an official request for the Commission to submit to the European Parliament a proposal for civil law rules on robotics.
Historically, the EU has drawn important lines around what it means to be human by requiring that computational technologies adhere to certain restrictions on fully automated processing, in effect injecting a human into the loop. In 2013, for example, the former first lady of Germany, Bettina Wulff, successfully sued Google for defamation over unfavourable terms that autocompleted in the search box when users typed her name. The court reached this decision despite Google's insistence that the predictions were made “algorithmically… without any human intervention”.
The report will be voted on in committee on the 29th of November, and the vote in plenary will take place either in December or in January next year. Currently, it is still unclear what the final report will look like, or whether the Commission will be brave enough to broach some of the difficult questions around algorithmic accountability. One thing that is certain, however, is that it will be a long time before the code to regulating the algorithm is cracked.