Our in-house language models generate audio transcriptions optimised for detecting dangerous misinformation in multiple languages.
Advanced machine learning models detect and flag harmful language, claims, narratives and policy violations within audio.
Analysis is further enhanced by crucial context and data points from human experts, including hashtags, keywords, phrases, slogans and slurs.
Our unique combination of local experts, data, and technology covers multiple languages, regions and areas of harm.
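To make the pipeline concrete, here is a minimal sketch in Python of the transcribe-then-classify flow described above. Kinzen's in-house models are proprietary, so this sketch assumes the open-source Whisper and toxic-bert models from Hugging Face as stand-ins, and the curated term list is purely illustrative.

```python
from transformers import pipeline

# Stand-in for the in-house multilingual transcription model (assumption).
asr = pipeline("automatic-speech-recognition", model="openai/whisper-base")

# Stand-in for the harmful-language classifier (assumption).
classifier = pipeline("text-classification", model="unitary/toxic-bert")

# Hypothetical sample of expert-curated terms (hashtags, slogans, slurs).
CURATED_TERMS = {"#example-harmful-hashtag", "example slogan"}

def flag_audio(path: str) -> dict:
    """Transcribe an audio file, then flag potentially harmful content
    using both a machine-learning classifier and expert-curated terms."""
    transcript = asr(path)["text"]
    ml_scores = classifier(transcript)
    matched = [t for t in CURATED_TERMS if t.lower() in transcript.lower()]
    return {"transcript": transcript,
            "ml_scores": ml_scores,
            "expert_matches": matched}

print(flag_audio("clip.mp3"))  # hypothetical input file
```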
How we can help you
Get ahead of threats by detecting potentially harmful content in audio.
Detect and analyse text, audio and video content that has the potential to cause harm. License the data that powers our detection and risk analysis.
Prepare for and respond to evolving narratives and developing crises.
With research analysts around the world, Kinzen gives you the clarity and confidence to act on evolving harmful narratives.
What makes us different
Every day, Kinzen experts track evolving threats of hate and harm across multiple platforms, using a range of monitoring tools that includes Kinzen's proprietary dashboard. Their findings are added to a Database of Harms.
Thousands of validated and searchable data points, including hashtags, keywords, phrases, slogans, slurs and claims, are matched across local markets and languages to highlight misinformation threats and hate speech. These data points also help train our machine learning models.
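As an illustration of what such a data point might look like in code, the sketch below models a single Database of Harms entry and a cross-language lookup. The schema (term, kind, language, market, category) and the sample entries are assumptions made for the example, not Kinzen's actual data model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HarmDataPoint:
    term: str      # hashtag, keyword, phrase, slogan, slur or claim
    kind: str      # e.g. "hashtag", "slogan"
    language: str  # ISO 639-1 code
    market: str    # local market where the term was validated
    category: str  # e.g. "misinformation", "hate speech"

# Illustrative entries only; the real database holds thousands.
DATABASE_OF_HARMS = [
    HarmDataPoint("#example-hoax", "hashtag", "en", "US", "misinformation"),
    HarmDataPoint("exemple de slogan", "slogan", "fr", "FR", "hate speech"),
]

def match(text: str, language: str) -> list[HarmDataPoint]:
    """Return validated data points for the given language found in the text."""
    lowered = text.lower()
    return [dp for dp in DATABASE_OF_HARMS
            if dp.language == language and dp.term.lower() in lowered]

print(match("They keep sharing #example-hoax posts", "en"))
```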
Our technology sifts through large volumes of data, generating automatic classifications of harmful content and allowing our clients to prioritise and act on high-risk content before it results in real-life harm.
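A simplified sketch of that prioritisation step might look like the following. The risk_score field and the 0.8 threshold are hypothetical, chosen only to illustrate surfacing the highest-risk items for review first.

```python
def prioritise(items: list[dict], threshold: float = 0.8) -> list[dict]:
    """Filter classified items to high-risk ones and sort them so
    reviewers see the riskiest content first."""
    high_risk = [i for i in items if i["risk_score"] >= threshold]
    return sorted(high_risk, key=lambda i: i["risk_score"], reverse=True)

# Hypothetical classifier output: content IDs with model risk scores.
queue = prioritise([
    {"id": "a1", "risk_score": 0.95},
    {"id": "b2", "risk_score": 0.41},
    {"id": "c3", "risk_score": 0.88},
])
print(queue)  # a1 and c3 surface for action, highest risk first
```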