case study


Autonomous removal of offensive content from communities without exposing moderators to this stressful material, creating healthier and more positive communities.

Proactive Moderation
1 December 2023
reading time: 3 minutes


Khoros is a globally recognized platform for digital-first customer engagement, boasting over two decades of experience in revolutionizing CX. Their platform combines software and services for digital care, messaging, chat, social marketing, and online communities. Khoros enables brands to harness the power of human connection across all digital interactions, facilitating over 500 million daily digital interactions and utilizing AI to turn these interactions into actionable insights.


Khoros provides content moderation services to some of the world's largest enterprises, monitoring brand-related communication on social media platforms, internal communication platforms, and user support systems. Most communication moderation is done manually by moderators who verify the compliance of communication with platform guidelines.

However, this manual approach generates high costs related to recruitment, employee retention, and the protection of moderators' mental health due to frequent exposure to distressing content. Moreover, manual moderation is prone to errors, with no guarantee of detecting all inappropriate content in a timely manner, potentially causing irreversible harm to the entire community.


Samurai Labs and Khoros took on the challenge of creating a system that protects not only a community's users from harmful content, but also the mental health of the moderators who are most exposed to it.

In a 2021 press release, Khoros officials said:

We partnered with Samurai Labs — an AI firm dedicated to preventing online violence — to detect potentially offensive content, and reject or approve posts. Offensive content is then automatically removed from communities without exposing moderators to this stressful material, ultimately creating healthier and more positive communities.

Leveraging the Khoros API, this system safeguards all user touchpoints, including boards, forums, chats, comment sections, and interactions with consultants. Samurai Labs protects these areas by automatically detecting and removing content that violates community standards. The integration with incoming data lets administrators create customizable workflows that specify when interventions occur and what form they take.
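The workflow idea described above can be sketched in a few lines of Python. This is a minimal illustration only: the function names, thresholds, and the `detect_violation` stub are assumptions made for the example and are not the actual Samurai Labs or Khoros APIs.

```python
# Hypothetical sketch of a configurable moderation workflow.
# All names and thresholds here are illustrative assumptions, not
# the real Samurai Labs / Khoros interfaces.

from dataclasses import dataclass


@dataclass
class Workflow:
    """Admin-defined rules: which action to take at which confidence level."""
    remove_threshold: float = 0.9   # auto-remove above this score
    review_threshold: float = 0.5   # queue for human review above this


def detect_violation(text: str) -> float:
    """Stand-in for the detection engine; returns a violation score in 0..1."""
    # Trivial keyword stub purely for illustration -- a real system
    # would use contextual analysis, not keyword matching.
    return 0.95 if "offensive" in text.lower() else 0.1


def moderate(text: str, workflow: Workflow) -> str:
    """Map a detection score onto the admin-configured intervention."""
    score = detect_violation(text)
    if score >= workflow.remove_threshold:
        return "remove"   # e.g. delete the post via the platform API
    if score >= workflow.review_threshold:
        return "review"   # e.g. hold the post for a human moderator
    return "approve"


print(moderate("an offensive post", Workflow()))  # remove
print(moderate("a friendly post", Workflow()))    # approve
```

The point of the two thresholds is the division of labor the case study describes: clear-cut violations are handled automatically, so human moderators only see the ambiguous middle band.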

One key advantage of our solution is its adaptability to community-specific nuances, moving beyond reliance on specific keywords. For instance, Samurai Labs' system understands when the word "fuck" has been used positively, significantly reducing the number of false positives, which drops to just a few percent in many of our case studies.

Developing an automatic detection system for enterprises also necessitates meeting stringent security requirements. As part of our service, we operate in accordance with the highest security standards, which have been audited by Khoros. Future plans include undergoing an independent security audit (SOC 2).


Implementing Samurai Labs in companies that provide manual moderation and manage user engagement on social platforms yields numerous benefits:

Cost Efficiency

offers an automated solution that reduces the operating costs of moderation (e.g., recruitment, onboarding, and employee retention)

Detection Accuracy

enhances precision in detecting guideline violations compared to manual moderation, allowing swift action

Mental Health Care

prioritizes the mental health of moderators by minimizing their exposure to the most drastic cases, at no additional cost

Time Savings

reduces the time moderators spend on overt violations of community standards, enabling them to focus on more complex cases

Resource Allocation

the time moderators save, along with their experience, can be redirected to R&D projects

Semantic Understanding

the ability to understand semantic nuances of communication, such as detecting humor or different meanings of the same keyword, drastically reduces false positives

Do you want to achieve such results with us?

Case Studies

We support various businesses in their efforts to moderate online speech. Read our recent case studies and find out how we help our clients all over the world.

Developing an AI system that meets the goals of the National Strategy for Preventing Veteran Suicide

Suicide rates have been historically high among Veterans, with the estimated risk being 57% higher than that of the general population. To investigate suicidal tendencies among this group, we collected over 41,000 posts from VA Disability Claims Community Forums –

Keeping schools and online learning safe by monitoring spaces and 1:1 chats to detect hostile or inappropriate conversations.

The purpose of Samurai is to detect and prevent violence, making it an essential asset for educational institutions that use platforms such as Webex for remote learning and communication.

Moreover, it has a positive impact on schools' reputations. By showcasing a reduced incidence of aggressive behavior, institutions can attract parents' preference and potentially enhance students' performance in international educational rankings.

Keeping a top gaming community safe from toxicity and cyberbullying by identifying 30% more cases of community guidelines violations

● Over 130 violations of Community Guidelines were detected by Samurai Guardian each day

● 30% more Community Guideline violations were detected and automatically removed by Samurai Guardian compared to human moderators.

● Less than 3% of Community Guideline violations were removed by moderators without being detected by Samurai Guardian.

Schedule a free consultation with our expert

Take the first step to make the change in your company with just a little effort. Our representative will contact you within 24 hours of submitting your request.
