Case Study

Freedom Games

Samurai’s Username Moderation detected over 12,000 toxic usernames during the first months of a new game title on the market

Username Moderation
1 December 2023
Reading time: 3 minutes

Client

Freedom Games is a game publishing company founded in Alabama, USA. The company is committed to creating amazing gaming experiences for players and to helping developers promote their games to new audiences. Freedom Games assists creators in making their games a reality for players all around the world, and it is staffed by a team of passionate gamers with decades of combined expertise in the games business.

Context

In May 2023, Freedom Games released Against All Odds, their new online game. From the outset, the company wanted to build a community free from toxicity, starting with players’ usernames.

Since the game is publicly available on Steam and gains up to several hundred new users a day, it was crucial to block toxic usernames from the very beginning rather than spend significant time and resources later on manual moderation of the community.

Solution

Samurai’s Username Moderation is the best method for protecting online game communities against disruptive, offensive, and harmful in-game identities. To catch a toxic username at the moment it is created and to uncover hidden meanings in usernames, we employ our proactive Neuro-Symbolic AI approach, which combines LLMs with symbolic reasoning.

Samurai protects communities by blocking hostile usernames that can alienate other players and encourage further abuse.

Freedom Games integrated our API into their registration and username creation process. As a result, they created a healthier environment, free from toxic usernames from the game’s very first days.
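To illustrate what such an integration can look like, the sketch below checks a proposed username against a moderation endpoint before the account is created. It is a minimal, hypothetical Python example: the endpoint URL, request fields, and response shape are assumptions made for illustration and do not document Samurai’s actual API.

    import requests

    # Hypothetical endpoint and credentials -- placeholders for illustration only,
    # not Samurai's documented API.
    MODERATION_URL = "https://api.example.com/v1/username-moderation"
    API_KEY = "YOUR_API_KEY"

    def is_username_allowed(username: str) -> bool:
        """Ask the moderation service whether a proposed username is acceptable."""
        response = requests.post(
            MODERATION_URL,
            json={"username": username},
            headers={"Authorization": f"Bearer {API_KEY}"},
            timeout=5,
        )
        response.raise_for_status()
        # Assumed response shape: {"toxic": true} or {"toxic": false}
        return not response.json().get("toxic", False)

    # During registration, reject the name before the account is created.
    if not is_username_allowed("ProposedPlayerName"):
        print("Please choose a different username.")

Checking the name at registration time, rather than after the fact, means a toxic identity never becomes visible to other players in the first place.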

Our Username Moderation service has remained active and has been verifying incoming Against All Odds usernames continuously since May 2023.

Results

Over 3 months we processed more than 340,000 usernames from Freedom Games. Some of these were duplicates; after removing them, circa 270,000 unique usernames remained, of which circa 12,000 were detected as toxic. This means that 4.27% of all username attempts were blocked for violating the community standards.

Our data-based analysis shows that we successfully detected 88.8% of all toxic usernames (recall) with a precision of 99.22%.
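For context, precision is the share of flagged usernames that are genuinely toxic, and recall is the share of all toxic usernames that get flagged. The short Python sketch below shows how the two metrics are computed; the confusion-matrix counts in it are hypothetical numbers chosen only to reproduce the reported rates, not Samurai’s actual evaluation data.

    # Hypothetical confusion-matrix counts, for illustration only.
    true_positives = 888     # toxic usernames correctly flagged
    false_positives = 7      # benign usernames incorrectly flagged
    false_negatives = 112    # toxic usernames that were missed

    precision = true_positives / (true_positives + false_positives)  # ~0.9922
    recall = true_positives / (true_positives + false_negatives)     # 0.888

    print(f"precision = {precision:.2%}, recall = {recall:.2%}")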

Within the data we found the full variety of toxicity types we detect in usernames, including sexual, offensive, profanity, political, drugs, violence, medical, religion, and other (inappropriate) content. By detecting such a wide range of toxic content, Samurai’s Username Moderation protects not only against vulgar language but also against offensive content that is inappropriate for kids or directed at minorities.

88.80%

Samurai Labs successfully detected 88.8% of all toxic usernames

99.22%

Samurai detected toxic usernames with a precision of 99.22%

340,000 usernames

In 3 months we processed over 340,000 usernames

Do you want to achieve such results with us?

Case Studies

We support various businesses in their efforts to moderate online speech. Read our recent case studies and find out how we help our clients all over the world.

Developing AI system that meets the goals of the National Strategy for Preventing Veteran Suicide

Suicide rates have been historically high among Veterans, with the estimated risk being 57% higher than that of the general population. In order to investigate the suicidal tendencies among this group, we collected over 41,000 posts from VA Disability Claims Community Forums – Hadit.com.

Keeping schools and online learning safe by monitoring spaces and 1:1 chats to detect hostile or inappropriate conversations.

The purpose of Samurai is to detect and prevent violence, making it an essential asset for educational institutions that use platforms such as Webex for remote learning and communication.

Moreover, it has a positive impact on schools’ reputation. By showcasing a reduced incidence of aggressive behavior, institutions can attract parents’ preference and potentially enhance students’ performance in international educational rankings.

Keeping a top gaming community safe from toxicity and cyberbullying by identifying 30% more cases of community guidelines violations

● Samurai Guardian detected over 130 Community Guidelines violations each day.

● Samurai Guardian detected 30% more Community Guideline violations than human moderators, and these would have been removed automatically.

● Less than 3% of the Community Guideline violations removed by moderators had gone undetected by Samurai Guardian.




    Schedule a free consultation
    with our expert

    Take the first step to make a change in your company with just a little effort. Our representative will contact you within 24 hours of submitting your request.
