Ofcom, the UK regulator now enforcing the Online Safety Act, is gearing up to size up a bigger target: search engines like Google and Bing, and the role they play in delivering self-harm, suicide and other harmful content at the click of a button, particularly to underage users.
A report commissioned by Ofcom and produced by the Network Contagion Research Institute found that major search engines, including Google, Microsoft Bing, DuckDuckGo, Yahoo and AOL, have become “one-click gateways” to such content by offering easy, quick access to harmful web pages, images and videos, with one out of every five search results around basic self-harm terms linking to further harmful content.
The research is timely and important because much of the recent focus on harmful online content has been on the influence and use of walled-garden social media sites such as Instagram and TikTok. This new research is, in large part, a first step toward helping Ofcom understand and gather evidence on whether there is a much greater potential threat, with open sites like Google.com attracting more than 80 billion visits per month, compared with TikTok’s roughly 1.7 billion monthly active users.
“Search engines are often the starting point for people’s online experience, and we are concerned that they can act as one-click gateways to highly harmful self-harm content,” Almudena Lara, director of online safety policy development at Ofcom, said in a statement. “Search services need to understand their potential risks and the effectiveness of their protection measures – particularly for keeping children safe online – ahead of our wide-ranging consultation due in the spring.”
Ofcom said researchers analyzed about 37,000 result links across those five search engines for the report. Using both common and more cryptic search terms (cryptic, to try to evade basic screening), they intentionally ran searches with “SafeSearch” parental screening tools turned off, mimicking both the most basic ways people might interact with search engines and the worst-case scenarios.
The results were in many respects as bad and devastating as you might imagine.
Not only did 22% of search results produce one-click links to harmful content (including instructions for various forms of self-harm), but that content made up 19% of the top-most links in the results (and 22% of the links on the first pages of results).
The researchers found that image searches were particularly egregious, with 50% returning harmful content, followed by web pages at 28% and video at 22%. One reason search engines fail to screen out some of this material, the report concludes, is that their algorithms may confuse imagery of self-harm with medical and other legitimate media.
Cryptic search terms were also better at evading screening algorithms, making a user six times more likely to come across harmful content.
One thing that isn’t addressed in the report, but is likely to become a bigger issue over time, is the role that generative AI searches might play in this area. So far, it appears that more controls have been put in place to prevent platforms like ChatGPT from being misused for toxic purposes. The question will be whether users figure out how to game those controls, and what that might lead to.
“We are already working to build an in-depth understanding of the opportunities and risks of new and emerging technologies, so that innovation can flourish while the safety of users is protected. Some applications of generative AI are likely to be within the scope of the Online Safety Act, and we would expect services to assess the risks related to its use when carrying out their risk assessment,” an Ofcom spokesperson told TechCrunch.
It’s not all a nightmare, though: some 22% of search results were also flagged as helpful in a positive way.
The report may be used by Ofcom to get a better idea of the issue at hand, but it is also an early signal to search engine providers of what they will need to be prepared to work on. Ofcom has already been clear that children will be its primary focus in enforcing the online safety law. In the spring, Ofcom plans to open a consultation on its Protection of Children Codes of Practice, which aims to set out “the practical steps search services can take to adequately protect children”.
This will include taking steps to reduce the chances of children being exposed to harmful content about sensitive topics such as suicide or eating disorders across the entire internet, including on search engines.
An Ofcom spokesperson said: “Tech firms that don’t take this seriously can expect Ofcom to take appropriate action against them in future.” That would include fines (which Ofcom has said it would use only as a last resort) and, in worst-case scenarios, court orders requiring internet service providers to block access to services that do not comply with the rules. There could also be criminal liability for executives who oversee services that violate the rules.
So far, Google has taken issue with some of the report’s findings and how it characterizes the company’s efforts, claiming that its parental controls do much of the important work that would invalidate some of those findings.
“We are fully committed to keeping people safe online,” a company spokesperson said in a statement to TechCrunch. “Ofcom’s study does not reflect the safeguards that we have in place on Google Search, and references terms that are rarely used on Search. Our SafeSearch feature, which filters harmful and shocking search results, is on by default for users under 18, while the SafeSearch blur setting – a feature which blurs explicit imagery, such as self-harm content – is on by default for all accounts. We also work closely with expert organizations and charities to ensure that when people come to Google Search for information about suicide, self-harm or eating disorders, crisis support resource panels appear at the top of the page.”

Microsoft and DuckDuckGo have not yet responded to a request for comment.