All Europeans are entitled to the right to freedom of expression under the EU’s Charter of Fundamental Rights. Article 11 states that “Everyone has the right to freedom of expression. This right shall include freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers.”

The problem we face in modern-day society stems from the rise of social media platforms, which now provide forums and ‘safe spaces’ for individuals to express their opinions, however offensive those may be. Unfortunately, in recent times the result has been a dramatic increase in online hate speech. Already this year we have seen Austria’s New Year baby, Asel, and the winner of the Miss Belgium competition, Angeline Flor Pua, become targets of online hate speech. In February, UK celebrity Katie Price gave evidence to the House of Commons about the online abuse experienced by her disabled son and her petition for a new online hate speech law and a register of offenders. Online hate speech is a growing problem for our societies as it continues to harm individuals and go unpunished. But what is the right solution? Should social media companies be doing more to prevent it? Is new legislation the answer? Are sanctions for failing to remove content effective? And how do we preserve the right to free speech while addressing the problem of online abuse?

In 2016, Microsoft, Twitter, Facebook and YouTube (with Instagram and Google+ planning to join shortly) signed the EU’s voluntary Code of Conduct on ‘Countering Illegal Hate Speech Online’, committing to review the majority of complaints about offensive comments or posts within 24 hours. In January this year the Commission disclosed the results of the third round of monitoring of the Code. It found that the IT companies removed 70% of the illegal hate speech notified to them (up from 59% in round two and 28% in round one), and that the companies have taken positive steps.

The approach favoured by Commissioner Jourová is not to target a 100% removal rate, because preserving the right to free speech is pivotal and she believes doing so would infringe that right. For the same reason of protecting Article 11, she has resisted calls for a more aggressive approach akin to that taken by the UK, France and Germany, and for the promotion of EU-wide legislation. However, she has said she would be willing to legislate if, in future, the Code failed to produce adequate results or there was too much fragmentation as member states drafted their own national rules. Recently, a leaked Commission document, “Recommendation for measures to effectively tackle illegal content online”, called for online platforms to remove posts promoting terrorism within one hour. The document is neither official nor a binding legislative proposal, but it certainly indicates that recommendations from the Commission are on the horizon, and the media have reported that an announcement will be coming soon.

In June 2017, Germany passed the NetzDG law, which came into force in October and imposes fines of up to €50m on companies that do not remove hate speech quickly enough. The law has proved controversial in Germany, with many arguing that it could lead to inadvertent censorship, as they believe it is wrong for companies to decide whether a post or comment is unlawful. This is certainly an extreme approach by the German Government, but it places a high compliance burden on the technology companies, requiring them to be more proactive in dealing with complaints. Facebook, for example, has recruited 500 additional staff in Germany to handle complaints under the new legislation, which can only be a positive thing.

In the UK, Prime Minister May has called for increased policing of online hate speech and recommended that the onus be put back on the technology companies to tackle online abuse. Specifically, May wants to legislate for internet companies to remove hateful content aligned to terror organisations within two hours of its being discovered, or face financial sanctions. Government plans include the launch of an Internet Safety Strategy, regular reporting by companies to set a baseline for how best to reduce online abuse, a Social Media Code, and an annual report on internet safety and transparency. The Government has also announced funding for new software that can automatically detect up to 94% of videos posted online by Islamic State. YouTube and Facebook already use AI to detect illegal material, and it is important that the giants share their tools with smaller companies and cooperate so the tech community can tackle the problem together.

Freedom of speech and expression is a fundamental cornerstone of our societies, and whilst it must be preserved, a line does need to be drawn in law between what is acceptable and what is not. We have seen the different positions adopted across the EU and will need to monitor how effective they are over the coming months. So far, the Commission’s approach appears to have resulted in significant progress and positive action by the technology companies since they joined the Code, and hopefully this trend will continue. With the German law so recent and the UK yet to fully implement its plans, we will have to wait and see how effective the legislative approach proves to be.