A few years ago, Karine Mellata and Michael Lin met while working on Apple's fraud engineering and algorithmic risk team, where they helped tackle online abuse issues including spam, bots, account security and developer fraud for Apple's growing customer base.
Despite their efforts to develop new models that kept pace with evolving patterns of abuse, Mellata and Lin felt they were falling behind, forced to rebuild essential elements of their trust and safety infrastructure over and over.
“As regulations put more scrutiny on teams to centralize their somewhat ad hoc trust and safety responses, we saw a real opportunity for us to help modernize this industry and help build a safer internet for everyone,” Mellata told TechCrunch in an email interview. “We dreamed of a system that could magically adapt as quickly as the abuse itself.”
So Mellata and Lin co-founded Intrinsic, a startup that aims to give safety teams the tools they need to prevent abusive behavior on their products. Intrinsic recently raised $3.1 million in a seed round with participation from Urban Innovation Fund, Y Combinator, 645 Ventures, and Okta.
Intrinsic’s platform is designed to moderate both user-generated and AI-generated content, providing the infrastructure customers – mainly social media companies and e-commerce marketplaces – need to detect and act on content that violates their policies. Intrinsic focuses on safety product integration, automatically orchestrating tasks like blocking users and flagging content for review.
“Intrinsic is a fully customizable AI content moderation platform,” Mellata said. “For example, Intrinsic can help a publishing company that produces marketing materials avoid giving financial advice, which carries legal liability. Or we can help marketplaces detect listings like brass knuckles, which are illegal in California but not in Texas.”
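To make the brass-knuckles example concrete, here is a minimal sketch of what a region-aware custom moderation category might look like. All names and the structure are invented for illustration; this is not Intrinsic's actual API, and the keyword matcher stands in for whatever model a real system would use.

```python
# Hypothetical sketch of a region-dependent moderation policy.
# The Policy class and its fields are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Policy:
    category: str                 # e.g. "weapons/brass-knuckles"
    banned_regions: set[str]      # regions where matching listings are blocked
    keywords: tuple[str, ...]     # naive keyword matcher stands in for a model

    def decide(self, listing_text: str, region: str) -> str:
        """Return 'block' if the listing matches the category in a banned region."""
        text = listing_text.lower()
        if any(k in text for k in self.keywords) and region in self.banned_regions:
            return "block"
        return "allow"

# Brass knuckles: illegal in California, legal in Texas.
brass_knuckles = Policy(
    category="weapons/brass-knuckles",
    banned_regions={"CA"},
    keywords=("brass knuckles", "knuckle duster"),
)

print(brass_knuckles.decide("Vintage brass knuckles, great condition", "CA"))  # block
print(brass_knuckles.decide("Vintage brass knuckles, great condition", "TX"))  # allow
```

The point of the sketch is that the policy, not the detector, encodes the jurisdiction-specific rule, which is the kind of granularity off-the-shelf classifiers do not offer.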
Mellata explains that no off-the-shelf classifiers exist for these kinds of granular categories, and that even a well-resourced trust and safety team would need weeks, or even months, of engineering time to add new automated detection categories in-house.
When asked about competing platforms like Spectrum Labs, Azure, and Cinder (which is nearly a direct competitor), Mellata said she sees Intrinsic standing out from the crowd in (1) explainability and (2) highly extensible tooling. She explained that Intrinsic allows customers to “ask” it about the mistakes it makes in content moderation decisions and get an explanation of its reasoning. The platform also hosts manual review and labeling tools that let customers fine-tune moderation models on their own data.
“Most traditional trust and safety solutions are not resilient and are not designed to evolve with abuse,” Mellata said. “Resource-constrained trust and safety teams are seeking vendor assistance now more than ever and are looking to reduce moderation costs while maintaining high safety standards.”
Without a third-party audit, it’s difficult to determine how accurate a particular vendor’s moderation models are, or whether they’re vulnerable to the kinds of biases that plague content moderation models elsewhere. Intrinsic, however, appears to be gaining traction, with “large, well-established” enterprise clients signing contracts in the “six-figure” range on average.
Intrinsic’s near-term plans are to grow its three-person team and extend its moderation technology beyond text and images to video and audio.
“The broader slowdown in tech is driving increased interest in automation for trust and safety, which puts Intrinsic in a unique position,” Mellata said. “COOs care about reducing costs. Chief compliance officers care about reducing risk. Intrinsic helps with both. We’re cheaper, faster, and catch more violations than existing vendors or comparable in-house solutions.”