Case study

Movie Games

Preparing a toxicity analysis of the official Discord communities of two adult games, and configuring a bot for autonomous, proactive moderation adapted to these communities.

Proactive Moderation / Toxicity Radar for Brands
1 December 2023
reading time: 5 minutes

Client

Movie Games is a global video game publisher from Poland, listed on the main market of the Warsaw Stock Exchange. The company was founded in 2016 by veterans of the video game industry and experienced entrepreneurs.

The Movie Games portfolio includes adventure games (“Lust for Darkness”, “Lust from Beyond”, “Lust from Beyond: M Edition”), simulators (“Drug Dealer Simulator”, “Gas Station Simulator”, “MythBusters: The Game” and “Alaskan Truck Simulator”), as well as titles in other genres like strategy or city builder.

Context

Movie Games approached us in 2022 to analyze the toxicity of players’ communication on the Discord servers for two of their games: “Lust from Beyond” and “Drug Dealer Simulator”. Both games are intended for adult audiences due to their themes and gameplay: Lust from Beyond contains a lot of erotic content, while Drug Dealer Simulator casts the player as a drug dealer selling illegal products.

Given the specific nature of these games, the challenge was to determine which types of content are acceptable and should not be moderated, and which actually violate community rules. Content that would unquestionably be moderated in other communities may be allowed here.

As a result of our work, we delivered a toxicity report and configured our bot to moderate these Discord servers autonomously.

Solution

In our analysis, we focused on the two games with the largest communities around them. For each of these servers, we prepared a separate S4M configuration designed to raise the existing standard of moderating toxic content and to bring moderation in line with the Discord Community Guidelines.

Due to the nature of Lust from Beyond, its server is marked NSFW, so neutral discussions about sex are permitted. At the same time, this means that special attention must be paid to potential sexual harassment and related toxic content. Other content we consider important from a moderation perspective includes personal attacks and threats – they violate both the server rules and Discord’s Community Guidelines.

During the listen-in session in which we gathered the data that informed the bot’s configurations, we detected over two hundred messages with strong sexual undertones (Strong Sexual Remark). Among them, we selected a subset that required special attention because they constituted sexual harassment; for these, we prepared a solution that allows S4M to delete similar content automatically. The remaining messages are reported to moderators, who can decide whether and how they should be moderated. These are messages that may still require intervention depending on the broader context, but it is not clear-cut that they violate server rules or go beyond generally accepted norms.

Moreover, in the data we received, we focused on the categories of threats (Threat) and personal attacks (Personal Attack). These categories do not appear in large numbers – in total, we detected 52 personal attacks and 4 threats. Still, they constitute forms of cyberbullying that require attention at the level of both server policy and Discord’s Community Guidelines. Therefore, in the configuration we propose, personal attacks directed at another user and threats at the Severe Violence level are removed automatically, while threats at the Violence level are reported for manual moderation.
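For readers curious how such per-category rules can translate into a configuration, the sketch below expresses the policy described above as simple code. It is purely illustrative: the category and severity names come from this case study, but the data structures and function names are hypothetical and do not represent S4M’s actual configuration format.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    AUTO_REMOVE = auto()   # the bot deletes the message on its own
    REPORT = auto()        # the bot flags the message for human moderators
    ALLOW = auto()         # permitted on this server (e.g. neutral NSFW talk)


@dataclass(frozen=True)
class Detection:
    category: str                 # "Strong Sexual Remark", "Personal Attack", "Threat"
    severity: str | None = None   # "Violence" or "Severe Violence" for threats
    harassment: bool = False      # True when a sexual remark targets another user


def decide(detection: Detection) -> Action:
    """Map a detection to a moderation action, following the rules described above."""
    if detection.category == "Strong Sexual Remark":
        # Sexual harassment is removed automatically; other strong remarks are
        # handed to moderators, because the server itself is NSFW.
        return Action.AUTO_REMOVE if detection.harassment else Action.REPORT
    if detection.category == "Personal Attack":
        return Action.AUTO_REMOVE
    if detection.category == "Threat":
        # Severe Violence is removed outright; Violence goes to manual review.
        return Action.AUTO_REMOVE if detection.severity == "Severe Violence" else Action.REPORT
    return Action.ALLOW


# Example: a threat at the "Violence" level is reported rather than deleted.
print(decide(Detection(category="Threat", severity="Violence")))  # Action.REPORT
```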

While listening in on the Drug Dealer Simulator server, we noticed that a form of automatic moderation focused on removing simple vulgarisms had already been introduced there: we came across 10 messages containing the word ‘fuck’ that were deleted within a second of being sent. The configuration we propose replicates this behavior and, in addition, detects and automatically removes attempts to bypass that filter (‘fking’, ‘fcking’, ‘fck’) as well as vulgarisms of similar severity found in other, unmoderated messages.
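As a conceptual illustration of the bypass problem, a simple pattern like the one below catches common obfuscated spellings alongside the plain word. This is only a sketch written for this write-up; S4M’s actual detection goes well beyond keyword matching.

```python
import re

# Toy pattern for the plain word and common filter-bypass spellings such as
# "fck", "fcking" or "f*ck". Real moderation models are far more sophisticated.
BYPASS_PATTERN = re.compile(r"\bf[\W_]*[uv]?[\W_]*c?[\W_]*k\w*\b", re.IGNORECASE)


def should_auto_remove(message: str) -> bool:
    """Return True when a message contains the vulgarism or a common bypass spelling."""
    return BYPASS_PATTERN.search(message) is not None


# The plain word and the bypass attempts seen in the listen-in data are caught,
# while ordinary words are left alone.
for text in ["fuck this", "fking hell", "fcking", "fck", "f*ck", "fireworks look great"]:
    print(text, "->", should_auto_remove(text))
```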

“Preparing a bot for these communities proved an interesting challenge, as the line between acceptable and offensive content was really thin. It was satisfying to put the adaptability of our modules to use to deliver a more complex solution. Opportunities like this make it possible to show that we can cater to the moderation needs of any community, regardless of its demographics.” – Ida Dziublewska, Product Owner

Results

Even if your game does not have a huge community centered on Discord, or your community allows a lot of content that would not be permitted on other servers, Samurai can help you. Thanks to toxicity analysis and automatic moderation, you can effectively increase player retention and satisfaction with the game.

Samurai Labs can prepare a bot tailored to the needs of specific communities, even those aimed specifically at adults. Just because a community is centered around NSFW content doesn’t mean it needs no moderation.

During the listen-in period, which lasted almost two months, we collected a total of 12,754 messages and detected almost 250 that required the attention of moderators or automatic moderation.

Autonomous work

S4M is able to automatically delete messages in a way that replicates and exceeds existing solutions

More detection

S4M detects cases that have been previously missed by moderators and prepares servers for safe and peaceful growth

No difference between bot and human

There were only 2 cases of messages deleted by moderators that were not detected by S4M – under-detection of toxic content is not an issue

High standard of moderation

S4M can adapt to server rules and support servers in complying with Discord Community Guidelines, ensuring a high standard of moderation

Do you want to achieve such results with us?

Case Studies

We support various businesses in their efforts to moderate online speech. Read our recent case studies and find out how we help our clients all over the world.

Developing an AI system that meets the goals of the National Strategy for Preventing Veteran Suicide

Suicide rates have been historically high among Veterans, with the estimated risk being 57% higher than that of the general population. In order to investigate the suicidal tendencies among this group, we collected over 41,000 posts from VA Disability Claims Community Forums – Hadit.com.

Keeping schools and online learning safe by monitoring spaces and 1:1 chats to detect hostile or inappropriate conversations.

The purpose of Samurai is to detect and prevent violence, making it an essential asset for educational institutions that use platforms such as Webex for remote learning and communication.

Moreover, it has a positive impact on the schools’ reputation. By showcasing a reduced incidence of aggressive behavior, institutions can attract parents’ preference and potentially enhance students’ performance in international educational rankings.

Keeping a top gaming community safe from toxicity and cyberbullying by identifying 30% more cases of community guidelines violations

● Over 130 violations of Community Guidelines were detected by Samurai Guardian each day

● Samurai Guardian detected 30% more Community Guideline violations than human moderators, and these would have been removed automatically.

● Less than 3% of Community Guideline violations were removed by moderators without being detected by Samurai Guardian.




Schedule a free consultation with our expert

Take the first step to make the change in your company with just a little effort. Our representative will contact you within 24 hours of submitting your request.
