We are Kinzen.
Our mission is to protect the world’s public conversations from information risk.

We provide data and research to trust and safety professionals, content moderators and public policy makers, helping them get ahead, and stay ahead, of threats such as dangerous misinformation, hateful content, violent content, violent extremism and dangerous organisations.

We use a blend of human expertise and machine learning to provide early warning of the spread of harmful content in multiple languages. Our team has developed unique technology that helps editors review large volumes of content in multiple formats, including text, video, audio and images. We have developed particular expertise in the moderation of podcasts.

[Photo: Kinzen's founders, Mark Little and Áine Kerr.]

Our Work

We help our clients make more precise and consistent decisions about evolving online threats to real-world safety. We do this by focusing on harmful content with the greatest capacity to incite violence, abuse or civil unrest, and by performing the following tasks:

Prioritise

Prioritise countries and languages in which clients have blind spots and where cultural nuance is critical.

Decode

Decode the cultural and linguistic nuances that distinguish harmful content from place to place.

Prepare

Prepare for events during which dangerous misinformation could undermine electoral integrity, provoke violence or promote conflict.

Pre-empt

Pre-empt the spread of international misinformation narratives that threaten public health, such as anti-vaccine campaigns.

Analyse

Analyse the evolution of persistent campaigns of hateful speech, such as antisemitism.

Anticipate

Anticipate the emergence of campaigns of violent rhetoric based on identity.


Our Team

Built on quality data

Our team is a uniquely experienced group of engineers, scientists, designers and developers.

We also employ a network of experts with deep knowledge and lived experience of cultural differences and nuances around the globe, who apply universal principles in identifying harmful content. They are journalists, researchers, authors and experts in open-source intelligence gathering, united by the common goal of supporting consistent standards of moderation across multiple languages, cultures and events.

Our Principles

Our principles start with two core foundations of journalism: fairness and impartiality. These concepts inform every aspect of our work, our culture, our recruitment and our conduct.
