
The UK recently announced new internet regulations, and things only get more difficult from here




Following the arduous multiyear passage of the Online Safety Act through the UK's lawmaking process, regulator Ofcom has published its first guidelines for how tech firms can comply with the mammoth legislation. Its proposal outlines how social media platforms, search engines, online and mobile games, and pornography sites should deal with illegal content such as child sexual abuse material (CSAM), terrorism content, and fraud.

Today's recommendations are presented as drafts so that Ofcom can gather feedback before they go to the UK Parliament for approval around the end of next year. Even then, the specifics will be technically voluntary: tech companies can follow them to the letter and be confident they're complying with the law, or they can take their own approach, so long as they can demonstrate compliance with the act's overarching rules (and, presumably, are prepared to argue their case with Ofcom).

"What this does for the first time is put a duty of care on tech firms to have a responsibility for the safety of their users," Ofcom's online safety chief, Gill Whitehead, tells The Verge in an interview. "When they become aware that there is illegal content on their platform, they have got to get it down, and they also need to conduct risk assessments to understand the specific risks that those services might carry."


The idea is to require sites to be proactive about stopping the spread of illegal content rather than playing whack-a-mole after the fact. Claire Wiseman, a lawyer who specializes in technology, media, telecoms, and data, describes the intent as encouraging a shift from a reactive to a more proactive approach.

According to Ofcom, some 100,000 services may fall under the broad guidelines, though only the largest and riskiest platforms will face the most stringent requirements. Those platforms should adopt measures such as preventing strangers from sending direct messages to children, using hash matching to detect and remove CSAM, maintaining content and search moderation teams, and giving users ways to report harmful content.
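As a rough illustration of how hash matching works (a simplified sketch, not any platform's actual system): a service computes a digest of each uploaded file and checks it against a database of digests of known illegal images supplied by bodies such as the Internet Watch Foundation. The Python below uses a plain SHA-256 digest and a made-up hash set for simplicity; the function and hash set names are hypothetical, and production systems typically use perceptual hashes such as Microsoft's PhotoDNA so that resized or lightly edited copies still match.

    import hashlib

    # Hypothetical set of digests of known illegal images, standing in
    # for a database supplied by a body like the Internet Watch Foundation.
    KNOWN_BAD_DIGESTS = {
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def matches_known_content(path: str) -> bool:
        """Return True if the file's SHA-256 digest is in the known-bad set."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            # Read in chunks so large uploads don't have to fit in memory.
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest() in KNOWN_BAD_DIGESTS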

Many of these procedures are currently followed by large tech platforms, but Ofcom hopes to see them used more consistently. "We think they represent best practice of what's out there, but it's not necessarily applied across the board," Whitehead said. "Some firms are applying it sporadically but not necessarily systematically, and so we think there is a great benefit for a more wholesale, widespread adoption."

There is one notable exception: the platform known as X (formerly Twitter). The UK's work on the legislation long predates Elon Musk's acquisition of Twitter, but the act was passed as he cut large parts of the company's trust and safety teams and presided over a loosening of its moderation rules, moves that could put X in conflict with regulators. Ofcom's guidelines say users should be able to easily block other users, but Musk has publicly said he plans to remove X's block feature. He has also clashed with the EU over similar rules and has reportedly considered pulling out of the European market to avoid them.

When asked whether X had been cooperative in conversations with Ofcom, Whitehead declined to comment but said the regulator had been "generally encouraged" by the response from tech firms overall.

Other illegal harms covered by Ofcom's guidelines include content that encourages or assists suicide or serious self-harm, harassment, revenge porn and other sexual exploitation, and the supply of drugs and firearms. Search engines, for example, should show "crisis prevention information" when users enter suicide-related queries, and when companies change their recommendation algorithms, they should carry out risk assessments to make sure they aren't amplifying illegal content. If users believe a site isn't following the guidelines, Whitehead says, they will be able to report it directly to Ofcom. If a company is found to be in violation, Ofcom can levy fines of up to £18 million (about $22 million) or 10 percent of global revenue, whichever is greater. Offending websites can even be blocked in the UK.
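To put that penalty formula in concrete terms: for a hypothetical platform with £1 billion in global annual revenue, the 10 percent cap would allow a fine of up to £100 million, far above the £18 million floor, so the fixed figure mostly matters for smaller services.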

Today's consultation addresses some of the least contentious parts of the Online Safety Act, such as limiting the spread of content that is already illegal in the UK. As further updates arrive, Ofcom will have to tackle thornier issues, such as content that is legal but harmful to minors, underage access to pornography, and protections for women and girls. Most contentiously, it will have to interpret a section that critics argue could fundamentally weaken end-to-end encryption in messaging apps.

That section gives Ofcom the authority to require platforms to detect child sexual abuse material using "accredited technology." But digital rights groups, WhatsApp, and other encrypted messaging services argue that such scanning would mean breaking the apps' encryption and compromising user privacy. Whitehead said Ofcom will consult on the issue in the coming year, so it's not yet clear what this will ultimately mean for encrypted messaging.

Artificial intelligence is another technology not directly addressed in today's consultation, but that doesn't mean AI-generated content escapes the guidelines. According to Whitehead, the Online Safety Act aims to tackle online harms in a "technology-neutral" way, regardless of how they are produced. A deepfake used to commit fraud would be in scope because of the fraud itself, for instance, while AI-generated CSAM would be in scope simply because it is CSAM. "We're regulating the context, not the technology," Whitehead asserts.

While Ofcom says it’s trying to take a collaborative, proportionate approach to the Online Safety Act, its rules could still prove onerous for sites that aren’t tech juggernauts. BBC News notes that Ofcom’s initial set of guidance is over 1,500 pages long. The Wikimedia Foundation, the nonprofit behind Wikipedia, tells The Verge that it’s proving increasingly challenging to comply with different regulatory regimes across the world, even if it supports the idea of regulation in general. “We are already struggling with our capacity to comply with the [EU’s] Digital Services Act,” the Wikimedia Foundation’s VP for global advocacy, Rebecca MacKinnon, says, pointing out that the nonprofit has just a handful of lawyers dedicated to the EU regulations compared to the legions that companies like Meta and Google can dedicate.

 

The foundation acknowledges its obligations as a platform, but as MacKinnon puts it, "It's problematic when you're a nonprofit and every hour of work is zero-sum."

According to Ofcom's Whitehead, the Digital Services Act and the Online Safety Act are more like "regulatory cousins" than "identical twins," which means complying with both takes extra work. She points to Ofcom's efforts to build a global network of online safety regulators as evidence that the agency is trying to make working across borders easier.

Passing the Online Safety Act amid a turbulent period in British politics was hard enough. But the real difficulties may only be beginning as Ofcom starts to fill in the details.

 
