tl;dr: We make it easy to take care of safety so that you can focus on the product. Intrinsic integrates policy management, data engineering, moderator workflow management, and content understanding under one platform. Ping us for a demo!
Hey everyone! We’re Karine and Michael, the team behind Intrinsic.
We’re working to connect safety teams with the best available tools to prevent abusive behaviour and tell better stories. We both come from Apple and Discord, where we worked on the Trust & Safety machine learning and data engineering infrastructure that was pivotal in protecting both platforms from large-scale abuse. We believe that a safer internet for everyone starts with more accessible safety tools.
Trust & Safety engineers are tasked with maintaining platform integrity and user safety in an era of evolving fake accounts, illegal content, hate speech, misinformation, and child safety concerns. Nobody wants this stuff on their platforms, but getting up and running with industry best practices is challenging.
Currently, there are two huge barriers to shipping safety-by-default products:
As a result, safety engineers and data scientists are caught in an endless cycle of reactive firefighting and tedious integration work.
Intrinsic is a Trust & Safety platform focused on scalable, composable processes, so that a company can respond faster to evolving threats:
Trust & Safety and Fraud are two problem domains that require rapidly evolving AI systems: they start with machine learning and rules (proactive detection) and end with human decisioning (content moderation). We believe that by streamlining this process and closing the feedback loop, the applications go far beyond Trust & Safety and into any domain that benefits from a rapidly evolving response.
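To make the loop above concrete, here is a minimal illustrative sketch of that pipeline shape: rules and a model score content proactively, borderline items are routed to human review, and moderator decisions are logged as labels to tune the system. All names, thresholds, and heuristics here are hypothetical, not Intrinsic's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class ModerationResult:
    item_id: str
    action: str   # "allow", "block", or "review"
    reason: str

# Hypothetical blocklist standing in for a real rules engine.
BANNED_TERMS = {"spamlink.example"}

def rule_score(text: str) -> float:
    """Proactive detection, stage 1: hand-written rules."""
    return 1.0 if any(term in text for term in BANNED_TERMS) else 0.0

def model_score(text: str) -> float:
    """Proactive detection, stage 2: toy heuristic standing in for an ML classifier."""
    return min(1.0, text.count("!") / 10)

def triage(item_id: str, text: str,
           block_at: float = 0.9, review_at: float = 0.5) -> ModerationResult:
    """Route content: auto-block, queue for a moderator, or allow."""
    score = max(rule_score(text), model_score(text))
    if score >= block_at:
        return ModerationResult(item_id, "block", f"score={score:.2f}")
    if score >= review_at:
        # Human decisioning: uncertain cases go to a moderator queue.
        return ModerationResult(item_id, "review", f"score={score:.2f}")
    return ModerationResult(item_id, "allow", f"score={score:.2f}")

# Closing the feedback loop: moderator decisions become labels that can
# retrain the model or tune the thresholds above.
feedback_log: list[tuple[str, str]] = []

def record_decision(item_id: str, moderator_action: str) -> None:
    feedback_log.append((item_id, moderator_action))
```

In practice the interesting engineering is in each stage (feature pipelines, queue management, label quality), but the overall control flow follows this detect → decide → learn shape.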
If you know anyone who’s ever had issues with the following,
ping Karine, Michael, or founders@withintrinsic.com; we would love to schedule a demo!