The UK’s newly empowered internet content regulator has published the first set of draft codes of practice under the Online Safety Act (OSA), which became law late last month.
More codes will follow, but this first set – which focuses on how user-to-user (U2U) services will be expected to respond to different types of illegal content – provides a steer on how the sweeping measures in the UK’s new internet rulebook will be shaped and implemented in a key area.
Ofcom says its first priority as “online safety regulator” will be protecting children.
The draft recommendations on illegal content include suggestions that larger and riskier platforms should avoid presenting children with lists of suggested friends; that child users should not appear in other users’ connection lists; and that children’s connection lists should not be visible to others.
It is also suggested that accounts outside a child’s connection list should not be able to send them direct messages, and that children’s location information should not be visible to other users – among a number of recommended risk mitigations aimed at keeping children safe online.
“Regulation is here, and we are wasting no time in setting out how we expect tech firms to protect people from illegal harm online, while upholding freedom of expression,” Dame Melanie Dawes, chief executive of Ofcom, said in a statement. “Children have told us about the dangers they face, and we’re determined to create a safer life online for young people in particular.”
“Our figures show that most secondary-school children have been contacted online in a way that potentially makes them feel uncomfortable. For many, it happens repeatedly. If these unwanted approaches occurred so often in the outside world, most parents would hardly want their children to leave the house. Yet somehow, in the online space, they have become almost routine. That cannot continue.”
The OSA imposes a legal duty on digital services, large and small, to protect users from risks posed by illegal content, such as CSAM (child sexual abuse material), terrorism and fraud. The list of priority offences in the legislation is long, though – also covering intimate image abuse, stalking and harassment, and cyberflashing, to name a few.
The exact steps in-scope services and platforms must take to comply are not set out in the legislation. Nor is Ofcom prescribing how digital businesses should act on every type of illegal content risk. But the detailed codes of practice it is developing are intended to provide recommendations that help companies decide how to adapt their services to avoid the risk of being found in breach of a regime that empowers the regulator to impose fines of up to 10% of global annual turnover for violations.
Ofcom is avoiding a one-size-fits-all approach – with some of the costlier recommendations in the draft code proposed only for the largest and/or riskiest services.
It also states that it is “likely to have the closest supervisory relationships” with “the largest and riskiest services” – a line that should bring a degree of comfort to startups (which generally will not be expected to implement as many of the recommended mitigations as more established services). It defines “large” services in the context of the OSA as those with more than 7 million monthly users (or around 10% of the UK population).
“Businesses will be required to assess the risk of users being harmed by illegal content on their platforms, and take appropriate steps to protect them from it. There is a particular focus on the ‘priority offences’ set out in the legislation, such as child abuse, grooming and encouraging suicide – but it could be any content that is illegal,” the regulator said in a press release, adding: “Given the scope and diversity of services in scope of the new laws, we are not taking a ‘one size fits all’ approach. We are proposing some measures for all services in scope, and other measures that depend on the risks a service has identified in its illegal content risk assessment and on the size of the service.”
The regulator appears to be treading relatively carefully in discharging its new responsibilities, with the draft code on illegal content often citing a lack of data or evidence to justify initial decisions not to recommend certain types of risk mitigation – for example, Ofcom is not proposing hash matching to detect terrorism content, nor is it recommending the use of AI to detect previously unknown illegal content.
It does suggest, though, that such decisions could change in the future as more evidence is gathered (and, no doubt, as available technologies evolve).
It also acknowledges the novelty of the endeavour – trying to regulate something as sweeping and subjective as online safety/harm – saying it wants its first codes to be a foundation to build on, including through a regular review process, suggesting the guidelines will shift and evolve as the regulation matures.
“Recognizing that we are developing a novel and groundbreaking set of regulations for a sector with no previous direct regulation of this kind, and that our current evidence base is limited in some areas, these first codes represent a foundation on which to build, through subsequent iterations and our forthcoming consultation on the protection of children,” Ofcom wrote. “In this context, our first proposed codes include measures aimed at sound governance and accountability for online safety, which aim to embed a culture of safety into organisational design and to iterate and improve safety systems and processes over time.”
Overall, this first set of recommendations looks reasonably uncontroversial – for example, Ofcom leans towards recommending that all U2U services should have “systems or processes designed to swiftly take down illegal content of which they are aware” (note the caveats); while “multi-risk” and/or “large” U2U services face a more comprehensive and specific list of requirements aimed at ensuring they have an effective content moderation system that is adequately resourced.
Another proposal it is consulting on is that all public search services should ensure URLs identified as hosting CSAM are de-indexed. But it is not yet making a formal recommendation that users who share CSAM be banned – citing a lack of evidence (and inconsistent current platform policies on user bans) as the reason for holding off at this point. The draft does say Ofcom “aims to explore a recommendation on CSAM-related user bans early next year,” though.
Ofcom also suggests that services identified as medium or high risk should provide users with tools that let them block or mute other accounts on the service. (Which should be uncontroversial to almost everyone – except, perhaps, X owner Elon Musk.)
It also steers clear of recommending certain techniques that are more experimental and/or imprecise (and/or intrusive) – so while it recommends that larger and/or riskier services carry out CSAM URL detection, to capture links to known CSAM sites and block them, it does not suggest they carry out CSAM keyword detection, for example.
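For illustration, URL detection of this kind typically boils down to normalising links found in user content and checking them against a blocklist of known-bad URLs (in practice sourced from a body such as the Internet Watch Foundation; the list and helper names below are hypothetical). A minimal Python sketch:

```python
from urllib.parse import urlsplit

# Hypothetical blocklist of known CSAM hosts and URLs; in practice this would
# be a large, regularly updated dataset from a body such as the IWF.
BLOCKED_HOSTS = {"known-bad.example"}
BLOCKED_URLS = {"known-bad.example/path/page"}

def normalise(url: str) -> str:
    """Lower-case the host and drop the scheme and trailing slash, so
    trivial variants of the same link compare equal."""
    parts = urlsplit(url if "://" in url else "https://" + url)
    return parts.netloc.lower() + parts.path.rstrip("/")

def is_blocked(url: str) -> bool:
    norm = normalise(url)
    host = norm.split("/", 1)[0]
    return host in BLOCKED_HOSTS or norm in BLOCKED_URLS

# A post containing a matching link would be blocked before being displayed.
print(is_blocked("HTTPS://Known-Bad.example/path/page/"))  # True
print(is_blocked("https://benign.example/article"))        # False
```

The normalisation step is part of what makes this approach relatively precise compared with keyword scanning: a link either resolves to a listed page or host, or it does not.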
Other preliminary recommendations include that major search engines display predictive warnings on searches that may be associated with CSAM, and serve crisis prevention information for suicide-related searches.
Ofcom is also suggesting that services use automated keyword detection to find and remove posts linked to the sale of stolen credentials, such as credit cards – targeting the myriad harms that flow from online fraud. However, it stops short of recommending the same technique to detect financial promotion scams specifically, as it is concerned this could sweep up too much legitimate content (such as posts promoting genuine financial investments).
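Automated keyword detection is, at its simplest, pattern matching of post text against a curated term list. A rough sketch of the idea – the term list here is invented for illustration – which also hints at why the same approach over-matches when applied to scam promotions:

```python
import re

# Invented terms associated with the sale of stolen credentials ("fullz",
# card "dumps", etc.); a real deployment would use a curated, maintained list.
STOLEN_CREDENTIAL_TERMS = ["fullz", "cvv dumps", "carding", "bank logs"]

PATTERN = re.compile(
    r"\b(" + "|".join(re.escape(t) for t in STOLEN_CREDENTIAL_TERMS) + r")\b",
    re.IGNORECASE,
)

def flag_post(text: str) -> bool:
    """Flag a post for review/removal if it matches any listed term."""
    return bool(PATTERN.search(text))

print(flag_post("Fresh CVV dumps for sale, DM me"))  # True -> flagged
print(flag_post("How do I dispute a card charge?"))  # False
```

Swap the term list for words like “investment” or “guaranteed returns” and the same mechanism would flag large volumes of legitimate financial content – the over-blocking risk Ofcom cites.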
Privacy and security watchers should breathe a sigh of relief reading the draft guidance as Ofcom appears to be moving away from the most controversial element of the OSA – namely its potential impact on end-to-end encryption (E2EE).
This has been a major bone of contention with the UK’s online safety legislation, drawing significant opposition – including from a number of tech giants and secure messaging firms. But despite vocal public criticism, the government did not amend the bill to remove E2EE from the scope of CSAM detection measures – instead, a minister offered a verbal assurance towards the end of the bill’s passage through Parliament, saying Ofcom could not be required to order the scanning of encrypted messages unless “appropriate technology” existed.
In the draft code, Ofcom’s recommendation that larger, riskier services use a technique called hash matching to detect CSAM sidesteps the controversy, because it only applies “in relation to content communicated publicly on U2U [user-to-user] services, where it is technically feasible to implement” (emphasis mine).
The draft code also states that, “consistent with the restrictions in the act, it does not apply to private communications or end-to-end encrypted communications.”
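Hash matching, in its simplest cryptographic form, compares a digest of each uploaded file against a database of digests of known CSAM maintained by bodies such as the IWF or NCMEC. (Production systems typically use perceptual hashes, such as PhotoDNA, which survive resizing and re-encoding, but the control flow is the same.) A minimal sketch with a placeholder hash set:

```python
import hashlib

# Placeholder digest standing in for a database of hashes of known illegal
# images, which in practice would come from a body such as the IWF or NCMEC.
KNOWN_HASHES = {
    # SHA-256 of the bytes b"test", used here purely for demonstration.
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def matches_known_content(upload: bytes) -> bool:
    """True if the upload's digest appears in the known-content database."""
    return hashlib.sha256(upload).hexdigest() in KNOWN_HASHES

# An upload whose digest matches would be blocked and reported rather than
# published; anything else passes through to normal processing.
print(matches_known_content(b"test"))         # True
print(matches_known_content(b"holiday.jpg"))  # False
```

The technique’s limits track Ofcom’s caveats: exact-match hashing only catches previously catalogued content, and none of it is possible where a service cannot see content at all, as with E2EE messages.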
Ofcom will now consult on the draft codes released today, and invite comments on its proposals.
Its guidance for digital businesses on how to mitigate illegal content risks won’t be finalized until next autumn – and compliance with those elements isn’t expected until at least three months after that. So there is a fairly generous lead-in period to give digital services and platforms time to adapt to the new regime.
It is also clear that the impact of the law will be staggered as Ofcom does more of this fleshing out of specific detail (and as any required secondary legislation is introduced).
Some elements of the OSA – such as the information notices Ofcom can issue to in-scope services – are already enforceable duties, though. Services that fail to comply with Ofcom information notices could face penalties.
There is also a time frame set out in the OSA for in-scope services to carry out their first children’s risk assessment, a key step that will help determine the type of mitigations they may need to implement. So there is already plenty of work digital businesses need to do to pave the way for the full regime to come.
An Ofcom spokesperson told TechCrunch: “We want to see services take action to protect people as soon as possible, and we see no reason for them to delay taking action. We believe our proposals today are a good set of practical steps that services could take to improve user safety. However, we are consulting on these proposals and note that some elements may change in response to evidence provided during the consultation process.”
Asked how it will determine which services are risky, the spokesperson said: “Ofcom will decide which services we supervise, based on our own view of the size of their user base and the potential risks associated with their functionalities and business model. We have said that we will notify these services within the first 100 days after royal assent, and we will keep this under review as our understanding of the industry develops and new evidence becomes available.”
On the timeline for the illegal content codes, the regulator also told us: “After we finalize our codes in our regulatory statement (currently planned for next autumn, subject to consultation responses), we will submit them to the Secretary of State to be laid in Parliament. They will come into force 21 days after passing through Parliament, and we will be able to take enforcement action from that point; we expect services to begin taking the necessary steps to comply no later than then. However, some mitigation measures may take time to implement, and we will take a reasonable and proportionate approach to decisions about when to take enforcement action, taking into account the practical constraints of putting mitigations in place.”
Ofcom also wrote in the consultation: “We will take a reasonable and proportionate approach in exercising our enforcement powers, consistent with our overall approach to implementation and recognizing the challenges facing services as they adapt to their new duties.”
“For illegal content and child safety duties, we expect only the most serious breaches to be prioritized for enforcement action in the very early stages of the regime, to allow services a reasonable opportunity to come into compliance. For example, this may include cases where there appears to be a very significant risk of serious and ongoing harm to UK users, and to children in particular. While we will consider what is reasonable on a case-by-case basis, all services should expect to be held to full compliance within six months of the relevant safety duty coming into force.”