Samurai’s Username

A guaranteed way of preventing toxicity at username creation

Request a free username analysis

How does Username Moderation work?

Our findings indicate that individuals with toxic usernames are about 2.2 times more likely to engage in violent activity and to have their accounts terminated by moderators. Unmoderated usernames can validate harmful behavior and inspire others to do the same. Blocking offensive usernames ensures that nobody sets the wrong tone for interactions within your community before they even begin to chat.

It is the best way to protect your communities from harmful, disruptive, and unauthorized identities. We utilize our proactive neuro-symbolic method to look for the initial attempt to create a toxic username and to uncover hidden meanings in usernames.

Samurai moderates offensive identities that could alienate other users or encourage more abuse, while also making sure your communities are safe from inventive hostile attacks.

Benefits of using Samurai’s Username Moderation

  • Reduction of aggressive actors in your community
  • Lower churn
  • Higher MRR
  • Higher health scores of your online community, leading to longer retention
  • Compliance with the latest government regulations all over the world (e.g. the UK Online Safety Act)
  • Lower costs thanks to a healthy community from the very beginning
  • Full working capabilities in English, Spanish and Polish, including their different dialects
  • Easy and fast implementation – ready to go within 24 hours of you making a decision
  • Extensive experience in the gaming industry – the engine was originally created based on data from some of the biggest online games

Want to check out our engine on your own data first?
Request a free username analysis

Toxic users can be very creative – we are one step ahead

Watch our demo and see how our Username Moderation works.
You can pass any username to the Username Moderation API to subject it to linguistic analysis that identifies any disruptive, offensive, sexual or otherwise inappropriate language. With Username Moderation, you can protect your communities and eliminate the risk of a toxic username setting the tone for the ensuing discourse within the community.

Samurai Labs uses a neuro-symbolic approach to dissect each username into its constituent parts, identify disruptive content and provide detailed information on what makes the username toxic. Crucially, Username Moderation can handle such text manipulation techniques as:

  • leetspeak – replacing letters with digits or symbols
  • spelling the username backwards
  • swapping the initial letters of two words
  • other types of self-censorship
  • intentional misspellings
  • flanking the username with various characters
  • letter multiplication
  • use of very similar letters from other alphabets, e.g. Russian Cyrillic у, с, or р, which, when used in an English text, escape traditional filters based on the Latin alphabet
  • use of letter variants with diacritics
  • and any combination thereof
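To illustrate the kind of preprocessing such techniques call for, here is a minimal, hypothetical normalization sketch. The mapping tables and function name are illustrative assumptions, not Samurai Labs’ actual engine, which uses a neuro-symbolic analysis rather than simple substitution:

```python
import re

# Illustrative tables only: a real system would need far larger mappings.
LEET = str.maketrans("0134$@", "oieasa")        # leetspeak: digits/symbols -> letters
HOMOGLYPHS = str.maketrans("усра", "ycpa")      # Cyrillic look-alikes -> Latin letters

def normalize(username: str) -> str:
    """Fold common obfuscation tricks before dictionary matching (sketch)."""
    s = username.lower()
    s = s.translate(HOMOGLYPHS)                 # undo cross-alphabet substitution
    s = s.translate(LEET)                       # undo leetspeak
    s = re.sub(r"[^a-z]", "", s)                # drop flanking/separator characters
    s = re.sub(r"(.)\1+", r"\1", s)             # collapse letter multiplication
    return s

print(normalize("xX_D34th_Xx"))  # -> xdeathx
print(normalize("m0rfin4"))      # -> morfina
```

A normalized form like this would then be matched against toxicity lexicons; handling reversed spellings or swapped initial letters requires additional passes not shown here.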

This way, Username Moderation is able to decipher even deeply hidden offensive meanings. Username Moderation then classifies toxic content into one of nine toxicity categories:

  • A wide range of vocabulary almost universally considered offensive, including, but not limited to, sexist, racist and queerphobic language, for example: D34thToFurr1es (Death to Furries), sucker90000, AutistikUncle
  • Words and phrases related to body parts, sexual activities and fetishes, for example: Whit3Long5chlong (White Long Schlong), CallMeSexGod, HornyItalian
  • A collection of vulgar words and phrases, common profanities and curse words, for example: FolyHuck (Holy Fuck), FackThis, HOLISSSHHHIIITT (Holy Shit)
  • Words and phrases that relate to serious and unambiguous violent acts, known serial killers and torture methods, for example: xXP3D0_B34RXx (pedo bear), FreeTedKaczynski, Homicide76
  • Vocabulary related to politics – controversial or condemned political figures, problematic world events or extremist worldviews, for example: Vl4dim1r_Put1n (Vladimir Putin), lovenazi, HolocaustWishes
  • Words and phrases related to offending on the grounds of religion, taboos or religious extremism, for example: Un3xp3ctdJ1h4d (unexpected jihad), AllahuAkbar2nd, JesusHas1BalSack
  • Drug- and medicine-related vocabulary, as well as slang terms related to drugs, drug use and drug users, for example: m0rfin4 (morfina), igotcrack, BongMaster
  • Vocabulary related to illnesses and medical procedures, mostly dealing with taboos or unpleasant imagery, for example: HIVdestruction, abortion survivor, DiarreahYumyYumy
  • A mixture of scatological humor, mildly violent or gaming-related violent vocabulary and other milder terms that are otherwise difficult to categorize, for example: Suic1deboy (suicide boy), IFartedAgain, Pervert_Pete

Case Studies

We support various businesses in their efforts to moderate online speech. Read our recent case studies and find out how we help our clients all over the world.

Developing AI system that meets the goals of the National Strategy for Preventing Veteran Suicide

Suicide rates have been historically high among Veterans, with the estimated risk being 57% higher than that of the general population. In order to investigate the suicidal tendencies among this group, we collected over 41,000 posts from VA Disability Claims Community Forums.

Keeping schools and online learning safe by monitoring spaces and 1:1 chats to detect hostile or inappropriate conversations.

The purpose of Samurai is to detect and prevent violence, making it an essential asset for educational institutions that use platforms such as Webex for remote learning and communication.

Moreover, it has a positive impact on the schools’ reputation. By showcasing a reduced incidence of aggressive behavior, institutions can attract parents’ preference and potentially enhance students’ performance in international educational rankings.

Keeping a top gaming community safe from toxicity and cyberbullying by identifying 30% more cases of community guidelines violations

● Over 130 violations of Community Guidelines were detected by Samurai Guardian each day

● Compared to human moderators, Samurai Guardian detected 30% more Community Guideline violations, which would be removed automatically.

● Less than 3% of Community Guideline violations were removed by moderators without being detected by Samurai Guardian.

Samurai’s Username Moderation detected over 12,000 toxic usernames during a new game title’s first months on the market

Over 3 months we processed over 340,000 usernames coming from Freedom Games. The data set contained duplicates; after removing them, circa 270,000 unique usernames remained, of which circa 12,000 were detected as toxic. This means that 4.27% of all username attempts were blocked for violating the community standards.

Our data-based analysis shows that we successfully detected 88.8% of all toxic usernames (recall) with a precision of 99.22%.
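For readers unfamiliar with the metrics, recall and precision follow their standard definitions; the sketch below uses hypothetical confusion counts chosen only to reproduce figures of the same shape, not the actual evaluation data:

```python
def precision(tp: int, fp: int) -> float:
    # Share of flagged usernames that were truly toxic.
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    # Share of truly toxic usernames that were flagged.
    return tp / (tp + fn)

# Hypothetical counts: 888 true positives, 7 false positives, 112 false negatives.
print(round(precision(888, 7), 4))  # -> 0.9922
print(round(recall(888, 112), 3))   # -> 0.888
```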

Conducting counter-speech interventions powered by AI to reduce toxicity and community violations on Reddit

The main objective of our experiment was to test whether the level of cyberviolence on Reddit can be significantly decreased by community-driven, counter-speech interventions conducted by users in partnership with Artificial Intelligence.

Preparing a toxicity analysis of the official Discord communities for two adult games and preparing a bot for autonomous proactive moderation adapted to these adult communities.

Even if your game does not have a huge community focused on Discord, or you have a community that allows a lot of content that is not allowed on other servers – Samurai can help you. Thanks to toxicity verification and automatic moderation, you can effectively increase player retention and satisfaction when playing the game.

Bringing real-time counterspeech to the darkest corners of the web in order to evaluate students after completing the online course

Samurai Labs, together with INACH, created an exercise for the participants of a counter-speech course in which AI delivered cases of hate speech from 4chan in real time with high precision. Thanks to its ability to scan millions of conversations within seconds, the most important needs of the course were addressed and the following value was created:

  • The possibility of evaluating the knowledge of trainees on real cases, straight after completing the course
  • Enabling people to test themselves and make real impact in the digital world with the acquired knowledge
  • The ability to positively model discussions on the web without the need to manually search for hate speech threads, saving time and hassle

AI Monitoring and Preventing Anti-Immigrant Hate on Social Media After Brexit Call

Samurai was engaged to develop AI models and analyze posts on Twitter and YouTube to detect violent speech acts against Poles. In the later stages of the project, Samurai’s data will power HateLab’s custom dashboard to better measure violence before and after Brexit, alerting authorities to trends and immediate threats.

Reducing personal attacks and cultivating healthier communities with autonomous counterspeech on Reddit

Analysis of the data showed that, without the help of penalties and bans, James reduced the subreddit’s aggression level by 19%. Moreover, the antisocial activity of those who received the interventions decreased in other Reddit groups as well.

There is reason to suspect that the interventions had a positive impact on overall user behavior rather than just motivating the audience to find another place to attack others.

Autonomous removal of offensive content from communities without exposing moderators to this stressful material and creating healthier and more positive communities.

Samurai Labs and Khoros took on the challenge of creating a system that would not only protect a community’s users from harmful content, but also take care of the mental health of the moderators who are most exposed to it.

Samurai Labs left the competition behind

Samurai Labs uses a proactive approach tailored specifically to usernames rather than general language toxicity detection. Our models were trained primarily on in-game usernames, which makes us the most precise solution for the gaming industry. There are dozens of methods people utilize to avoid detection which means that tackling them requires industry knowledge and purpose-built solutions. Our product easily deals with typos or other simple attempts at obfuscation but also with users’ more creative approaches to creating a toxic username.

Specialized knowledge is a necessity for effective detection of offensive usernames. This is an issue for some of the most popular AI models in the world, which struggle to detect more advanced, short names because they take a generalized approach.

Let’s start using Username Moderation in your company

Implementing Username Moderation is quick and simple. In most cases, our product can be implemented within 24 hours of making the decision to cooperate, regardless of the size of the user base or industry. After implementation, our product is completely maintenance-free, and you have access to a customized dashboard with all the necessary data. If you have any questions or additional needs, our customer service department is available to you without limits.

    Schedule a free consultation with our expert

    Take the first step to make a change in your company with just a little effort. Our representative will contact you within 24 hours of submitting your request.


    Frequently asked questions

    Haven’t found the answer to your question? Check our FAQ below
    or schedule a call using the contact form above.

    Do you provide any username scoring? 

    While the API response does not provide scores, it provides two parameters that can be used for customization: the severity of the detected offensive content within the username, and categories (e.g. ‘profanity’, ‘sexual’, ‘offensive’, ‘political’, and more) describing the type of offensive content detected. This makes it possible to target specific problem areas in more detail than general, broader solutions.
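As a sketch of how these two parameters could drive a moderation policy, the snippet below assumes a hypothetical response shape; the field names follow the answer above, but the exact schema and values are assumptions, not the documented API:

```python
# Hypothetical response shape; "severity" and "categories" follow the FAQ,
# but "detected" and the value sets are illustrative assumptions.
response = {
    "detected": True,
    "severity": "high",
    "categories": ["profanity", "sexual"],
}

def decide(result: dict) -> str:
    """Example policy: block severe or sexual usernames, queue the rest for review."""
    if not result.get("detected"):
        return "allow"
    if result["severity"] == "high" or "sexual" in result["categories"]:
        return "block"
    return "review"

print(decide(response))  # -> block
```

The point of the two parameters is exactly this kind of per-community tuning: a stricter community might block on any detection, while a more permissive one might only block the highest-severity categories.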

    Can I try out your product before purchasing it?

    Yes, you can. We can provide a free analysis of your usernames. The only thing you have to do is to schedule a call with us and then provide us with a sample of your username database. As a result, you will receive information about the number of toxic usernames in your community and an overview of the kinds of toxicity we detected.

    Can I implement Username Moderation before the launch of my product?

    Yes, you can. Our Username Moderation is often used before launch to act before the damage is done. It’s an even better solution than implementing it in an existing community because you prevent toxic behavior instead of reacting to it post factum.

    Interested in other Samurai Labs solutions?