Social media users will have more control over what they see online and who can interact with them as the government steps up the fight against anonymous trolls
New measures added to Online Safety Bill in fight against anonymous abusers
Main social media firms will have to give people the power to control who can interact with them, including blocking anonymous trolls
Platforms will also need to offer tools to give people more control over what posts they see on social media
To put more power in the hands of people using social media, the biggest and most popular firms will be required to provide users with tools to tailor their experiences, giving them more say over who can communicate with them and what kind of content they see.
The government recognises too many people currently experience online abuse and there are concerns that anonymity is fuelling this, with offenders having little to no fear of recrimination from either the platforms or law enforcement.
Over the past year people in the public eye, including England’s Euro 2020 footballers, have suffered horrendous racist abuse. Female politicians have received abhorrent death and rape threats, and there is repeated evidence of ethnic minorities and LGBTQ+ people being subject to coordinated harassment and trolling.
So today the government is confirming it will add two new duties to its Online Safety Bill to strengthen the law against anonymous online abuse.
The first duty will force the largest and most popular social media sites to give adults the ability to block people who have not verified their identity on a platform. A second duty will require platforms to provide users with options to opt out of seeing harmful content.
Digital Secretary Nadine Dorries said:
Tech firms have a responsibility to stop anonymous trolls polluting their platforms.
We have listened to calls for us to strengthen our new online safety laws and are announcing new measures to put greater power in the hands of social media users themselves.
People will now have more control over who can contact them and be able to stop the tidal wave of hate served up to them by rogue algorithms.
The vast majority of social networks used in the UK do not require people to share any personal details about themselves - they are able to identify themselves by a nickname, alias or other term not linked to a legal identity.
Removing the ability for anonymous trolls to target people on the biggest social media platforms will help tackle the issue at its root, and complement the existing duties in the Online Safety Bill and the powers the police have to tackle criminal anonymous abuse.
First duty - user verification and tackling anonymous abuse
The draft Online Safety Bill already places requirements on in-scope companies to tackle harmful content posted anonymously on their platforms and manage the risks around the use of anonymous profiles. This could include banning repeat offenders associated with abusive behaviour, preventing them from creating new accounts or limiting their functionality.
Under a new duty announced today, ‘category one’ companies with the largest number of users and highest reach - and thus posing the greatest risk - must offer ways for their users to verify their identities and control who can interact with them.
This could include giving users options to tick a box in their settings to receive direct messages and replies only from verified accounts. The onus will be on the platforms to decide which methods to use to fulfil this identity verification duty but they must give users the option to opt in or out.
When it comes to verifying identities, some platforms may choose to provide users with an option to verify their profile picture to ensure it is a true likeness. Or they could use two-factor authentication, where the platform sends a code to a user’s mobile number for them to confirm.
Alternatively, verification could include people using a government-issued ID such as a passport to create or update an account.
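In practice, the opt-in described above amounts to a simple filter on incoming interactions: if a user has switched the setting on, messages and replies from unverified accounts are not delivered. A minimal sketch of that rule, with all names hypothetical (the bill leaves the actual implementation to each platform):

```python
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    verified: bool  # whether this account has completed the platform's identity check

@dataclass
class UserSettings:
    verified_only: bool = False  # opt-in: accept DMs/replies only from verified accounts

def can_interact(sender: Account, recipient_settings: UserSettings) -> bool:
    """Return True if the sender may message or reply to the recipient."""
    if recipient_settings.verified_only and not sender.verified:
        return False
    return True

# A user who opts in stops hearing from unverified accounts,
# while verified accounts can still reach them.
troll = Account("anon123", verified=False)
friend = Account("jane_doe", verified=True)
settings = UserSettings(verified_only=True)
print(can_interact(troll, settings))   # False
print(can_interact(friend, settings))  # True
```

Note that the default is off, matching the duty's requirement that users can opt in or out rather than having verification imposed on everyone.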
Banning anonymity online entirely would negatively affect those who rely on it for positive online experiences or for their personal safety, such as domestic abuse victims, activists living in authoritarian countries or young people exploring their sexuality.
The new duty will provide a better balance between empowering and protecting adults - particularly the vulnerable - while safeguarding freedom of expression online because it will not require any legal free speech to be removed. While this will not prevent anonymous trolls posting abusive content in the first place - providing it is legal and does not contravene the platform’s terms and conditions - it will stop victims being exposed to it and give them more control over their online experience.
Users who see abuse will be able to report it and the bill will significantly strengthen the reporting mechanisms companies have in place for inappropriate, bullying and harmful content, and ensure they have clear policies and performance metrics for tackling it.
Edleen John, The FA’s Director of International Relations, Corporate Affairs and Co-partner for EDI, said:
The FA welcomes the news that the Government will be strengthening the Online Safety Bill to protect users from anonymous online abuse. For too long, footballers and other participants across the game have been subjected to abhorrent discriminatory abuse from those who hide behind a cloak of anonymity, which has perpetuated a culture of impunity online. This needs to stop.
The measures announced by the Government are a helpful first step to put the onus and responsibility on social media companies to create a safe space for all their users, and to give people the option to control who they interact with and what they see online. We look forward to the Online Safety Bill being introduced to the House of Commons in the near future.
Second duty - giving people greater choice over what they see on social media
The bill will already force in-scope companies to remove illegal content such as child sexual abuse imagery, the promotion of suicide, hate crimes and incitement to terrorism.
But there is a growing list of toxic content and behaviour on social media which falls below the threshold of a criminal offence but which still causes significant harm. This includes racist abuse, the promotion of self-harm and eating disorders, and dangerous anti-vaccine disinformation. Much of this is already expressly forbidden in social networks’ terms and conditions but too often it is allowed to stay up and is actively promoted to people via algorithms.
Under a second new duty, ‘category one’ companies will have to make tools available for their adult users to choose whether they want to be exposed to any legal but harmful content where it is tolerated on a platform.
These tools could include new settings and functions which prevent users receiving recommendations about certain topics or place sensitivity screens over that content.
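The two tools described above can be pictured as a per-user decision at the point a post is surfaced: content matching an opted-out topic is either excluded from recommendations or shown behind a click-through screen. A hypothetical sketch (the bill does not prescribe any particular mechanism, and all names here are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class Preferences:
    muted_topics: set = field(default_factory=set)  # topics the user has opted out of
    sensitivity_screens: bool = True                # screen matching content rather than hide it

@dataclass
class Post:
    text: str
    topics: set

def present(post: Post, prefs: Preferences) -> str:
    """Decide how a post is surfaced to this user: shown, screened, or hidden."""
    if post.topics & prefs.muted_topics:
        # The user opted out of this topic: either place a sensitivity
        # screen over it or exclude it from their feed entirely.
        return "screened" if prefs.sensitivity_screens else "hidden"
    return "shown"

prefs = Preferences(muted_topics={"self-harm"})
print(present(Post("...", {"self-harm"}), prefs))  # screened
print(present(Post("...", {"football"}), prefs))   # shown
```

The example of self-harm recovery content in the notes below illustrates why the choice sits with the user: the same legal content may be valuable to one person and distressing to another.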
Notes to Editors
Ofcom will set out in guidance how companies can fulfil the new user verification duty and the verification options companies could use. In developing this guidance, Ofcom must ensure that the possible verification measures are accessible to vulnerable users and consult with the Information Commissioner, as well as vulnerable adult users and technical experts.
Under the proposed new user empowerment duty, for harmful content that category one companies do accept, they would have to provide adult users with the tools to control what types of legal but harmful content they see. This could include, for example, content on the discussion of self-harm recovery which may be tolerated on a category one service but which a particular user may not want to see.
Further information on how companies will be able to fulfil the new identity verification requirements will be set out by the regulator Ofcom in codes of practice.