Content moderation remains a controversial topic in the world of online media. New regulations and public concern will likely keep it a priority for many years to come, but technological developments such as weaponized artificial intelligence are making it harder to handle. A startup from Cambridge, England, called Unitary AI believes it has found a better way to meet the moderation challenge: a "multimodal" approach to analyzing content in the most complex medium of all, video.
Unitary today announced $15 million in funding to capitalize on its market momentum. The Series A, led by top European venture capital firm Creandum with participation from Paladin Capital Group and Plural, comes as Unitary's business grows: the number of videos it classifies has jumped this year to 6 million per day from 2 million (covering billions of images), and the platform is now adding more languages beyond English. The company declined to name its customers, but says annual recurring revenue (ARR) is now in the millions.
Unitary is using the funding to expand into more regions and hire more talent. The company does not disclose its valuation; it previously raised just under $2 million, followed by a further $10 million, in seed financing. Other backers include the likes of former Meta executive Carolyn Everson.
There have been dozens of startups in recent years using different aspects of AI to build content moderation tools.
And when you consider the sheer scale of the challenge, video is a fitting application for AI. No army of people alone could analyze the tens or hundreds of zettabytes of data generated and shared on platforms like YouTube, Facebook, Reddit, or TikTok, not to mention dating sites, gaming platforms, video-conferencing tools, and other places where videos appear, which in aggregate make up more than 80% of all internet traffic.
This angle is also what interests investors. "In the online world, there is a tremendous need for a technology-driven approach to identifying harmful content," Christopher Steed, chief investment officer at Paladin Capital Group, said in a statement.
However, it is a crowded space. OpenAI, Microsoft (using its own AI, not OpenAI's), Hive, ActiveFence/Spectrum Labs, Oterlu (now part of Reddit), Sentropy (now part of Discord), and Amazon's Rekognition are just a few of the many tools in use.
From Unitary AI's perspective, current tools aren't as effective as they could be when it comes to video. That's because tools built so far typically focus on analyzing one type of data at a time, for example text, audio, or images, but not all of them together, simultaneously. This leads to a lot of false flags (or, conversely, missed ones).
"What's innovative about Unitary is that we have genuinely multimodal models," said CEO Sasha Haco, who co-founded the company with CTO James Thewlis. "Rather than just analyzing a series of frames, in order to understand the nuance and whether a video is [for example] artistic or violent, you need to be able to mimic the way a human moderator watches a video. We do this by analyzing text, audio and visuals."
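The idea Haco describes can be illustrated with a toy late-fusion sketch. This is entirely hypothetical (Unitary has not published its architecture, and the scorers here are crude stand-ins for real models): each modality produces its own harm score, and a combiner weighs them so that context from one channel, such as on-screen "trailer" text, can temper a false flag from another.

```python
# Toy late-fusion moderation sketch (hypothetical; not Unitary's actual model).
# Each per-modality scorer returns a probability that a clip is violent.
# A frame-only system would flag a movie fight scene; fusing the audio
# transcript and on-screen text can restore the artistic context.

def score_frames(frames: list[str]) -> float:
    """Stand-in for a vision model: fraction of visually violent frames."""
    violent = sum(1 for f in frames if "fight" in f or "blood" in f)
    return violent / max(len(frames), 1)

def score_transcript(transcript: str) -> float:
    """Stand-in for a speech-to-text + NLP model on the audio track."""
    return 0.9 if "kill" in transcript.lower() else 0.1

def score_onscreen_text(texts: list[str]) -> float:
    """Stand-in for OCR + text classification of on-screen text."""
    if any("trailer" in t.lower() or "rated" in t.lower() for t in texts):
        return 0.05  # movie-marketing context suggests artistic content
    return 0.5

def moderate(frames, transcript, texts, threshold=0.6) -> bool:
    """Late fusion: a weighted average of the three modality scores."""
    score = (0.5 * score_frames(frames)
             + 0.3 * score_transcript(transcript)
             + 0.2 * score_onscreen_text(texts))
    return score >= threshold

# A fight scene framed as a movie trailer: frame-only scoring (0.67) would
# exceed the threshold, but the fused score (~0.37) stays below it.
print(moderate(["fight", "fight", "crowd"], "Coming this summer", ["Official Trailer"]))
```

Real systems would replace the keyword heuristics with learned models and might fuse features earlier in the pipeline, but the principle of combining modalities before deciding is the same.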
Clients set their own criteria for what they want to moderate (or not), and Haco said they will typically use Unitary alongside a human team, which in turn now has less work to do and faces less stress.
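Client-defined criteria like those Haco mentions might boil down to a per-category threshold map. The sketch below is hypothetical (the category names, thresholds, and routing function are invented for illustration): anything scoring above a client's threshold is routed to the human review team rather than auto-actioned.

```python
# Hypothetical per-client moderation policy (category names and thresholds
# are invented). Scores come from an upstream classifier; anything above a
# client's threshold is routed to human reviewers.

POLICY = {
    "violence": 0.7,      # e.g. a gaming client tolerates stylized violence
    "hate_speech": 0.3,
    "nudity": 0.5,
}

def route_for_review(scores: dict[str, float], policy: dict[str, float]) -> list[str]:
    """Return the categories a human moderator should review for this clip."""
    return [cat for cat, s in scores.items() if s >= policy.get(cat, 0.5)]

clip_scores = {"violence": 0.8, "hate_speech": 0.1, "nudity": 0.2}
print(route_for_review(clip_scores, POLICY))  # ['violence']
```

A design like this keeps policy in the client's hands while the classifier stays generic, which matches the described division of labor between the automated system and the human team.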
"Multimodal" moderation sounds obvious; why wasn't it done before?
One reason is that "you can make a lot of progress using just the older, visual model," Haco said. That, however, leaves a gap in the market for improvement.
The reality is that content moderation challenges continue to dog social platforms, gaming companies, and other digital channels where users share media. Recently, social media companies have signaled a retreat from their strongest moderation policies; fact-checking organizations are losing momentum; and questions remain about the ethics of moderation work when it comes to harmful content. The appetite for the fight has diminished.
But Haco has an interesting track record when it comes to working on difficult, ambiguous problems. Before Unitary AI, Haco, who holds a PhD in quantum physics, worked on black hole research with Stephen Hawking. She was there when the Event Horizon Telescope team captured the first image of a black hole, but she wanted to shift her focus to terrestrial problems, which can be as difficult to understand as space-time's gravitational monsters.
Her "surprise," she said, was that there were a lot of content moderation products and a lot of hype, but nothing yet that quite matched what customers actually wanted.
Meanwhile, Thewlis' expertise is being put to work directly at Unitary: he, too, holds a PhD, in computer vision from the University of Oxford, where his specialty was "methods for visual understanding with less manual annotation."
(I think the name "Unitary" is a double reference. The startup works to unify a number of different signals to better understand videos. But it may also nod to Haco's previous career: unitary operators are used to describe quantum states, which are themselves complex and unpredictable, just like online content and the humans who create it.)
Multimodal research in artificial intelligence has been ongoing for years, but it seems we are entering an era where we will start to see more applications of the concept. Case in point: last week Meta referenced multimodal AI several times in its Connect keynote previewing new AI assistant tools. Unitary thus sits at an interesting intersection between cutting-edge research and real-world application.
"We first met Sasha and James two years ago and were incredibly impressed," Gemma Bloemen, a director at Creandum and board member, said in a statement. "Unitary has emerged as a clear leader in the important field of AI for content safety, and we are very excited to support this exceptional team as they continue to accelerate and innovate in content classification technology."
"Since the beginning, Unitary has had some of the most powerful AI technology for classifying harmful content. The company has already accelerated this year to seven figures of ARR, which is almost unheard of in this world, and this is still the early stage of the journey," said Ian Hogarth, a partner at Plural and also a board member.