On 6 September, two separate events were held in the Parliament - Protecting European Democracy in a Post-Truth Society, hosted by ALDE, and Fake News: Political and Legal Challenges, organised by S&D. Both events addressed how information has been disseminated, used and abused throughout recent election cycles. The speakers approached the topic from a variety of angles: some looked at the phenomenon from a structural perspective - questioning, for example, whether fake news is a new problem, or how it can be distinguished from other reporting - while others, including professionals working on responses to the problem, discussed it from a technical and legal perspective.

Both events kicked off by asking how we got to this point. As the first speakers - MEP Marietje Schaake and Professor Arnaud Mercier - highlighted, critical stories are not per se fake news. What needs to be addressed, the panellists stressed, are the tools that allow the spread of maliciously false stories, often devoid of any logical basis and often written by ‘bots’. As the panels also highlighted, this is a phenomenon that came to the fore in the last few voting cycles - including the Italian and Brexit referenda, as well as the US, French and Italian elections. As Lisa-Maria Neudert, University of Oxford, and Ingrid Brodnig, journalist, noted in their respective panels, the statistical evidence is clear. Referring to a study of 27 million tweets, Ms Neudert highlighted that the ratio of genuine information to ‘junk news’ contained therein was circa 7 to 1 in France - but 1 to 1 in the US. The reasons for this disparity are unclear, although Ms Brodnig suggested that the closeness of the US Presidential race was perhaps a factor. The German election, by contrast, in which Chancellor Merkel has consistently enjoyed an advantage in the polls, proved to be less fertile ground for such a level of disinformation.

A point made by both Prof. Mercier and David Kaye, UN Special Rapporteur on Protection of Freedom of Speech, is that the tech companies, whose news feeds users increasingly rely upon for up-to-date information, can no longer be considered to provide “just a platform”; their products and services now function as media, and such companies should therefore be held to the same standards as other media outlets. On the question of how to address the issue, opinions diverged. Morten Løkkegaard, MEP, and Professor Vincent Hendricks, University of Copenhagen, agreed that regulation was not necessarily desirable, while Prabhat Agarwal, DG Connect, commented that fake news is bad, “but a Ministry of Truth would be worse”.

Another common point made by speakers was that fake news is a consequence of tech revenue models; as Ms Brodnig described it, “angry people click more”. It was interesting, therefore, to hear from Thomas Myrup Kristensen, Facebook Managing Director of EU Affairs, who explained how Facebook has put into practice a policy of clearly labelling news sources as news, and highlighted that Facebook has expanded the tools with which users can flag content as offensive.

Two events, multiple panels, but ultimately similar findings: the term ‘Fake News’ is understood to describe untrue stories spread with the aim of deliberately and negatively influencing opinion of a candidate. The panels agreed that, although the phrase may have been popularised recently, fake news is not a new concept; there is, however, a concern that the term is coming to be loosely applied to legitimate criticism as well. Speakers discussed how social media exerts a growing influence on election campaigns, and is changing the way campaigns are reported in the news, as social media platforms increasingly become a primary source of news for consumers. Because those same platforms have an incentive to generate clicks, consumers see more and more of the content that algorithms predict they will react to, forming a ‘bubble’ of self-reinforcing stories - often of dubious credibility.

Whilst legal measures are being put in place to require social media platforms to act when harmful content is reported - or face heavy penalties - speakers suggested that tech companies should be more proactive in weeding out false reports, so that consumers and electorates at large are better informed, and thus better equipped to make decisions affecting their livelihoods and those of their fellow citizens.