Parliament is inquiring into the spread of so-called ‘fake news’ and is also looking into the harms posed to young people on social media platforms. There’s potential for social media platforms, such as Snapchat and Facebook, to face strict regulation in a bid to protect young people from harm – but is such regulation the right step to take?

Snapchat is expected to argue to MPs sitting on the Commons Culture Committee that social platforms should not be held fully responsible for how users interact on their platforms, but that there should also be a level of regulation by Government.

The idea is gaining traction that these social media companies are in fact publishers rather than simply platforms, and that they should therefore be subject to appropriate regulation. The opposing argument is that social media companies are in the business of providing platforms, and that tougher regulation would stifle their capacity for technological innovation.

I take a more nuanced view. While social media companies can in some respects be classified as publishers (they host advertisements and in many cases commission their own content), their primary function is to provide a public space. They’re free for anyone to access, and the principle of social networking is to interact with your peers. Like a public space you would physically attend, a social network exists as a means of contacting friends and family, and so it comes with rights such as freedom of expression and free speech.

Social media companies have also become so large that regulating them and the content posted on them is near impossible. Facebook, for instance, has upwards of 35 million monthly active users in the UK alone. With more than 35 million people posting, commenting, sharing and messaging each month on a single platform, how can we realistically expect Facebook to stay on top of everything that could be considered illegal and/or immoral? The number of employees it would take to monitor the UK’s output alone would be huge, and something would still always be missed.

I believe the current system, in which posts are reported to Facebook, reviewed, and then potentially taken down or forwarded to the relevant authorities, is the right one. Facebook has clear guidelines on what content is and is not allowed on the platform, but perhaps these could be strengthened further and the consequences for breaching the user agreement made firmer. Social networks should also cooperate with law enforcement far more than they currently do, and it should be easier for law enforcement to access illegal material posted on a platform and to track down the person(s) who posted it.

We’ve found ourselves in a difficult situation. You’d struggle to find anyone who truly believes children shouldn’t be protected from harm on social media. At the same time, however, we should not (and would likely struggle to) limit the rights of adults to use what are effectively public spaces for the sharing of uncensored ideas. We’d risk coming close to Chinese-style censorship if we introduced regulation of what can and cannot be done online and started punishing the platforms for what their users do.

Perhaps a more appropriate answer is to strengthen existing laws around hate speech, the promotion of extremist material and the like, making them more easily applied to the online world. We also need to make the consequences of misusing social media clear to the public. It shouldn’t be the platforms that get punished for what other people use them for: they provide a public space, and what people choose to do with that space is down to them.