The Online Safety Act, which originated in 2017 after the death of 14-year-old Molly Russell, has evolved to include provisions that create a new criminal offence of “knowingly sending false information” that causes harm and criminalise sending “seriously threatening messages” online. These apply not just to content targeting children but to adults as well.
The Act also grants significant power to Ofcom, the government-approved regulator, and exempts “recognised news publishers” from fines for potentially harmful material, while independent journalists, citizen journalists and social media commentators face content restrictions.
It hands significant censorship powers to a single official, Ofcom’s chief executive Melanie Dawes, and gives police the power to arrest citizens for sending “false communications” or “threatening messages” online, a power that has already been used to restrict free speech.
The Act compromises privacy rights by forcing online platforms to deploy technology that detects and removes illegal content, even within end-to-end encrypted messages, and requires age verification for users.
Aiming to make Britain the “safest place” to be online, the original 2017 green paper outlined plans for a voluntary code for social media companies to tackle abuse, annual transparency reports on harmful content and responses to it, and a levy on tech firms to fund awareness campaigns.
Education was also key, with plans to integrate digital literacy into school curricula and to reach parents, children and caregivers alike.
The initial recommendations placed some burdens on social media firms, but they were far from draconian: inconvenient safeguards, perhaps, but arguably necessary ones.
Then, the tide shifted.
By April 2019, Theresa May’s Home Office and the Department for Digital, Culture, Media and Sport had become involved, co-publishing the ‘Online Harms White Paper’. Now, with ministers citing Molly’s fate, the scope of the plans expanded.
It was here that we first saw proposals for a legal obligation on companies to take reasonable steps to safeguard users from illegal content, to shield children from exposure to certain legal content and, the big one, to tackle “harmful but legal” content.
The mandate was widened to include seemingly everything.
They also proposed establishing an independent regulator to oversee compliance, develop codes of practice and impose sanctions on companies failing to meet the new rules.
This is what came into force on Monday, 17 March 2025, when technology companies were required to complete compulsory content risk assessments showing how their algorithms downgrade certain content.
Failure to do so could result in fines of up to £18 million or 10% of worldwide revenue, whichever is greater.
After further drafts in 2021 and legislative amendments throughout 2022, the bill passed through Parliament and received Royal Assent as the Online Safety Act in October 2023.
Campaigners successfully pressured representatives to withdraw the “harmful but legal” provision, citing its vague and subjective nature, which would no doubt have had a chilling effect on online speech.
It marked a solid win. But while attention fixated on that provision, the government, civil service and stakeholders successfully pushed through other, let’s say, more insidious clauses.
One of those was Section 179, which introduced a brand new criminal offence of “knowingly sending false information” that causes “non-trivial psychological or physical harm.”
The provision is obviously intended to prevent things like cyber-bullying. What we didn’t know was that it would be used by police forces to arrest citizens for mere speculation.
You read that right.
The story of Bernadette “Bernie” Spofforth is a case in point.
On 29 July 2024, just hours after the heinous attack, Bernie misidentified Southport child murderer Axel Rudakubana as “Ali-Al-Shakati” on X (formerly Twitter). About a week later, Cheshire Police arrested her on suspicion of “stirring up racial hatred” and sending “false communications.”