Case Study

GTA Online Discord Community

Keeping a top gaming community safe from toxicity and cyberbullying by identifying 30% more cases of community guidelines violations

Proactive Moderation
1 December 2023
reading time: 4 minutes

Client

The GTA Online Discord server is a community focused on discussing content related to the Grand Theft Auto games. It has over 450,000 users who exchange more than 50,000 messages daily. Personal attacks and toxic behavior appear every day in high volume, and their sheer number makes manual moderation virtually impossible.


Context

The community has its own official Community Guidelines that all users agree to follow. In reality, however, with so many users and messages sent every day, it is hard to keep track of the content that appears there every minute.

The point of our cooperation was to validate Samurai in a large community setting and confirm whether it can match and fully replace human moderators in the detection and removal of cyberbullying and toxic behavior.

Solution

Samurai is a high-precision system designed to reduce customer attrition and user-base shrinkage by mitigating negative user experiences. The software aligns itself with a community's existing guidelines, which lets it replace and outperform human moderators thanks to its higher accuracy.

Onboarding started with preparing Samurai to run on the server in a way that outperforms the existing moderators. The process was straightforward and took approximately two weeks, proceeding in three steps:

  1. Listening to the stream of data;

  2. Automatic data analysis;

  3. Final in-depth review of the proposed configuration.

Once that was done, Samurai Guardian was ready to automatically moderate channels without overlooking toxic behavior that, according to the Community Guidelines, should be removed.

How did it work in practice?

We gathered a total of 1,061,132 messages over a period of nearly three weeks. Of these:

● Samurai Guardian detected 21,381 messages in total, 2,683 of which constitute Community Guideline violations. The remaining messages could be of concern in other online spaces but are not a factor for the GTAO community.

● On average, Samurai Guardian detected over 130 Community Guideline violations each day.
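As a sanity check, the daily average above follows directly from the reported totals. A short sketch (the 20-day window is an assumption, since "nearly three weeks" is not exact):

```python
# Sanity check on the reported observation-period statistics.
# The 20-day window is an assumption ("nearly three weeks" is not exact).
total_messages = 1_061_132   # all messages gathered
detected = 21_381            # messages flagged by Samurai Guardian
violations = 2_683           # flagged messages violating the Guidelines
days = 20                    # assumed length of the observation period

violations_per_day = violations / days
detection_rate = detected / total_messages
violation_share = violations / detected

print(f"{violations_per_day:.0f} violations/day")        # 134, i.e. "over 130"
print(f"{detection_rate:.2%} of messages flagged")       # 2.01%
print(f"{violation_share:.2%} of flags are violations")  # 12.55%
```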

Certain milder offenses that Samurai detects, such as rejection or mild profanity, were deemed unsuited to this server and were switched off in the configuration prepared from this data. GTA is a game with a focus on adult content, so the milder categories do not require moderation.
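The per-category tuning described above can be pictured as a simple mapping from detection categories to actions. The category names and structure below are illustrative assumptions; Samurai's actual configuration format is not published.

```python
# Hypothetical sketch of per-category moderation settings for this server.
# Category names and structure are illustrative assumptions, not
# Samurai's real configuration format.
category_config = {
    "personal_attack": {"enabled": True,  "action": "remove"},
    "cyberbullying":   {"enabled": True,  "action": "remove"},
    "threat":          {"enabled": True,  "action": "remove"},
    # Milder categories switched off for this adult-oriented community:
    "mild_profanity":  {"enabled": False, "action": None},
    "rejection":       {"enabled": False, "action": None},
}

def should_remove(category: str) -> bool:
    """Return True if messages in this category should be auto-removed."""
    cfg = category_config.get(category)
    return bool(cfg and cfg["enabled"] and cfg["action"] == "remove")

print(should_remove("personal_attack"))  # True
print(should_remove("mild_profanity"))   # False
```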

[Screenshots: example messages flagged by Samurai during the observation period]

These examples come from the observation period, when Samurai was listening to the stream of data. Although this user was ultimately banned, they managed to post a significant amount of content that remained visible on the server for some time. Samurai detected the user after just a few messages; depending on the configuration, it could have flagged them after the very first message and prevented the escalation.

What makes Samurai Labs better than manual moderators?

● There is no issue of under-detection.

● Samurai Guardian accurately replicates moderator behavior with very few misses.

● Numerous detection categories are available, making it possible to customize which types of messages are detected depending on the community in which it operates.

● Samurai detects messages that violate community standards in real time and reacts before the damage is done. Samurai does not simply cure; it prevents.

● In a sample of messages that were removed by moderators but not detected by Samurai Guardian, fewer than 3% were actually found to be violent.

During the listening period, many of the messages that Samurai Guardian would have flagged for moderators were ignored, either through oversight or lack of reporting.

Results

Over 130 community violations every day

 Over 130 violations of Community Guidelines were detected by Samurai Guardian each day.

30% more violations detected than by human moderation

Compared with human moderators, Samurai Guardian detected 30% more Community Guideline violations, all of which would have been automatically removed.

1,816 more toxic behaviors detected

Samurai Guardian detected 1,816 cases of toxic behavior that required moderator attention but were never acted upon.

High accuracy

Fewer than 3% of Community Guideline violations removed by moderators went undetected by Samurai Guardian.

Do you want to achieve such
results with us?

Case Studies

We support various businesses in their efforts to moderate online speech. Read our recent case studies and find out how we help our clients all over the world.

Developing AI system that meets the goals of the National Strategy for Preventing Veteran Suicide

Suicide rates have been historically high among Veterans, with the estimated risk being 57% higher than that of the general population. In order to investigate the suicidal tendencies among this group, we collected over 41,000 posts from VA Disability Claims Community Forums – Hadit.com.

Keeping schools and online learning safe by monitoring spaces and 1:1 chats to detect hostile or inappropriate conversations.

The purpose of Samurai is to detect and prevent violence, making it an essential asset for educational institutions that use platforms such as Webex for remote learning and communication.

Moreover, it has a positive impact on schools’ reputations. By showcasing a reduced incidence of aggressive behavior, institutions can attract parents’ preference and potentially enhance students’ performance in international educational rankings.

Samurai’s Username Moderation detected over 12,000 toxic usernames during a newly launched game title’s first months on the market

Over 3 months, we processed over 340,000 usernames from Freedom Games. Some of these were duplicates; after removing them, circa 270,000 unique usernames remained, out of which circa 12,000 were detected as toxic. This means that 4.27% of all username attempts were blocked for violating the community standards.

Our data-based analysis shows that we successfully detected 88.8% of all toxic usernames (recall) with a precision of 99.22%.
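The precision and recall figures above follow the standard definitions. A minimal sketch with illustrative counts (the exact confusion-matrix numbers were not published; the values below are assumptions chosen only to reproduce the reported percentages):

```python
# Standard precision/recall definitions applied to username moderation.
# These counts are illustrative assumptions, not published figures;
# they are chosen to reproduce the reported 99.22% precision and 88.8% recall.
true_positives = 11_906   # toxic usernames correctly blocked (assumed)
false_positives = 94      # clean usernames wrongly blocked (assumed)
false_negatives = 1_501   # toxic usernames that slipped through (assumed)

precision = true_positives / (true_positives + false_positives)
recall = true_positives / (true_positives + false_negatives)

print(f"precision: {precision:.2%}")  # precision: 99.22%
print(f"recall: {recall:.2%}")        # recall: 88.80%
```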




    Schedule a free consultation
    with our expert

    Take the first step to make the change in your company with just a little effort. Our representative will contact you within 24h after submitting your request.
