White: Social Media ‘Single Biggest Source Of Online Harm’

Social media platforms are the “single biggest source of online harm” and regulation is needed to combat this while preserving freedom of speech, Ofcom chief executive Sharon White has said.   

Speaking at the Royal Television Society London conference this week, Ms White highlighted the difference in regulatory regimes for broadcast and online, saying that consumers were being forced into a “standards lottery.”

Ms White, who also announced new Ofcom research highlighting the harmful effects of social media, said: “The broadcasting and online worlds are competing under different conditions, even as the online world takes up an ever greater share of our time. This has profound consequences for viewers – especially for children, who may well not distinguish between the two.

“Without even knowing it, viewers are watching the same content, governed by different regulation in different places, or by none at all. This is a standards lottery. If protection matters, and we all believe it does, this cannot be our message to viewers – ‘choose your screen, and take your chances’.

“Now there are welcome signs that the technology giants are increasingly alive to their responsibilities. Facebook and YouTube are hiring around 30,000 content moderators this year. But trust in them is already weakening. Our research shows that people see social platforms as the single biggest source of online harm – and most people want the rules to be tighter.

“The role of regulators is evolving too. New European laws will give national regulators some oversight of video-sharing platforms, requiring companies such as YouTube to address child harm, terrorism and hate speech. But most online content will remain unregulated, including words and images on social media, and videos that aren’t on sharing platforms.

“The UK Government is already considering how to level that playing field. And the DCMS select committee has suggested that broadcasting standards, as defined by Parliament and implemented by Ofcom, should provide the basis for setting standards online.”

She concluded: “I hope our paper today might prove useful to policymakers – as they work to curtail the internet’s harmful aspects, while preserving its powerful benefits to society, culture, trade and freedom of expression.”

According to Ofcom’s ‘Internet users’ experience of harm online’ research, social media is the most harmful online environment: 71 per cent of those who had experienced harm online through interaction with other internet users named social media as the place where it had taken place.

The research, carried out by Ofcom with the Information Commissioner’s Office, found that social media was named as the main place where internet users had experienced online harm across all categories surveyed – harm relating to data/privacy, content, and interactions with other users.

Of those who believed there was a regulator for social media sites, 40 per cent – the highest proportion – did not know who it was, while 28 per cent – the next highest – thought it was Ofcom.

Among respondents who believed there was no regulation beyond the law, 55 per cent – the highest figure for any platform – said there should be more regulation of social media websites, with just three per cent saying there should be less.

By contrast, among those who knew that there is a regulator that sets rules for national newspaper websites, a clear majority (66 per cent) believe the current level of regulation should stay the same.

Conducted by Kantar, the research found that protection of children is the leading area of concern, with respondents citing potential harms such as exploitation, inappropriate content, and bullying, harassment or trolling.

Around eight in ten adult internet users (79 per cent) have concerns about aspects of going online, while almost half (45 per cent) have experienced some form of online harm.

The harms reported included spam emails or communications (20 per cent), viruses or malicious software (14 per cent), scams, fraud or identity theft (13 per cent), and fake news or disinformation seen online (10 per cent).

One in five respondents said they had reported harmful content encountered online, with younger adults more likely to do so.

Almost half of those who said they had reported harmful content were aged 16-34, with only 16 per cent over the age of 55. Illegal sexual content is the type of content most likely to be reported, followed by content that promotes terrorism and racism.