Kinzen helps companies that host content improve their response to highly complex moderation challenges. We support the wide variety of teams inside these companies working to promote trust and ensure safety in online communities, including (but not limited to) experts in policy, enforcement, threat assessment, risk response and curation, and the product teams who support them.
Kinzen also works with content moderation service providers engaged by large technology companies, and consults with public policy makers seeking to better understand and respond to harmful content.
Kinzen does not make decisions about what content to moderate. Instead, we support our partners as they develop and enforce the policies best suited to the communities they seek to protect.
Our goal is to help our clients make more precise and consistent decisions about evolving online threats to real-world safety. We do this by focusing on the harmful content with the greatest capacity to incite violence, abuse or civil unrest.
Technology plays a critical role at Kinzen, scaling our work through a continuously improving feedback loop between human analysts and artificial intelligence. Our experts and analysts gather and label data, which is then used to train machine learning models; those models in turn help analysts improve the detection and understanding of harmful content across languages and platforms.
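As a rough illustration of that feedback loop, the Python sketch below trains a simple text classifier on analyst-labelled examples and surfaces high-scoring new content for human review. It is a minimal sketch, assuming scikit-learn; the example data, the labels, the threshold and the `flag_for_review` helper are all hypothetical and stand in for far more sophisticated production tooling.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Step 1: analysts gather and label example content (all examples hypothetical).
labelled_texts = [
    "coordinated call to attack a polling station",
    "post celebrating a local bake sale",
    "instructions for harassing a named journalist",
    "review of a new hiking trail",
]
labels = [1, 0, 1, 0]  # 1 = harmful, 0 = benign

# Step 2: the labelled data trains a simple text classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(labelled_texts, labels)

def flag_for_review(texts, threshold=0.5):
    """Score new content and return likely matches for analyst review.

    Confirmed analyst decisions would be added back into the labelled
    set, closing the loop for the next training run.
    """
    scores = model.predict_proba(texts)[:, 1]  # probability of the "harmful" class
    return [(text, score) for text, score in zip(texts, scores) if score >= threshold]

print(flag_for_review(["open call to storm the council offices"]))
```

In practice the features, models and thresholds would be richer and multilingual, but the shape of the loop is the same: label, train, flag, review, and relabel.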