case study

Webex by Cisco

Keeping schools and online learning safe by monitoring spaces and 1:1 chats to detect hostile or inappropriate conversations.

Proactive Moderation
1 December 2023
reading time: 3 minutes

Client

Webex by Cisco is an American company that develops and provides web conferencing, videoconferencing, and contact-center-as-a-service applications. It was founded as WebEx in 1995 and acquired by Cisco Systems in 2007. Its software portfolio includes a range of solutions that facilitate online collaboration for organizations.


Context

Webex products are used not only by private companies but also by many schools and online learning organizations, mainly in the United States. Within these institutions, online communication is used by school managers, teachers, and children of all ages, who are the most vulnerable to the detrimental impact of hate, aggression, and toxic discourse.

At Samurai Labs, we addressed these challenges by developing a bot that integrates seamlessly with the Webex platform for a selected organization, detecting and preventing aggressive behavior before the damage is done.

Solution

Upon integration with a platform, our bot comes preconfigured using previously gathered data. Administrators can specify which content should be flagged as inappropriate and which should not. Tailored to the characteristics of an organization (such as a primary school or a university), the bot’s functionality can be customized to align with user behavior and community guidelines. This adaptability is provided through predefined modules that can be enabled or disabled as needed. Currently, Samurai Labs offers a range of linguistic modules covering various areas of communication, including:

  • Cyberbullying
  • Blackmail
  • Suicide Declaration
  • Sexual Harassment
  • Sexual Remark
  • Soliciting Photos
  • Threats 
  • Profanity
  • School Shooting (in development)
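Conceptually, the per-organization setup amounts to toggling detection modules on or off. The sketch below illustrates that idea; the module names mirror the list above, but the configuration schema and function names are illustrative assumptions, not Samurai Labs’ actual API.

```python
# Hypothetical per-organization module configuration (illustrative only).
MODULES = {
    "cyberbullying": True,
    "blackmail": True,
    "suicide_declaration": True,
    "sexual_harassment": True,
    "sexual_remark": True,
    "soliciting_photos": True,
    "threats": True,
    "profanity": False,  # e.g. disabled for a university community
}

def active_modules(config):
    """Return the set of detection modules currently enabled."""
    return {name for name, enabled in config.items() if enabled}
```

An administrator for a primary school might flip `profanity` back on, while a university might leave it off to reduce noise.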

Integrated with Webex, the bot monitors all spaces across the organization, swiftly flagging messages that match the filters above. Its activity is then reported to a dedicated channel accessible to teachers and administrators. Beyond automatic moderation, this allows staff to react promptly, whether through replies, message removal, or more decisive measures.
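The monitoring loop described above can be sketched as: a message event arrives, the enabled filters are applied, and any hit is reported to the moderation channel. The filter rules, data shapes, and function names below are simplified assumptions for illustration, not the production detection system.

```python
# Illustrative sketch of the monitoring flow (not the real detectors).
# Each filter maps a module name to a crude predicate over message text.
FILTERS = {
    "profanity": lambda text: "damn" in text.lower(),
    "threats":   lambda text: "i will hurt" in text.lower(),
}

def moderate(message, report):
    """Apply every enabled filter to a message; report each violation.

    `message` is a dict with "room" and "text"; `report` is a callable
    that delivers a violation record to the moderation channel.
    """
    hits = [name for name, match in FILTERS.items() if match(message["text"])]
    for name in hits:
        report({"room": message["room"], "filter": name, "text": message["text"]})
    return hits
```

In a real deployment the `report` callable would post to the dedicated Webex moderation space, where teachers and administrators see the flagged message and can respond.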

Over time, the organization can fine-tune the filters based on real communication data from its facility, further improving the accuracy of violation detection.

Results

The app has been officially certified by Webex and is available on the Webex App Hub. With Samurai, educational institutions can:

  • Keep track of spaces and individual chats that may not be visible to teachers, detecting instances of cyberbullying and harassment
  • Establish a secure digital environment that encourages students to engage and collaborate without fear, ensuring their well-being
  • Alert staff members when inappropriate conduct is identified, allowing for swift intervention
  • Streamline processes for proactive interventions, enabling early responses to potential issues
  • Monitor and comprehend the overall health of conversations over time, facilitating a better understanding of communication dynamics

The purpose of Samurai is to detect and prevent violence, making it an essential asset for educational institutions that use platforms such as Webex for remote learning and communication.

Moreover, it has a positive impact on schools’ reputations. By demonstrating a reduced incidence of aggressive behavior, institutions can win parents’ preference and potentially enhance students’ performance in international educational rankings.

Do you want to achieve such results with us?
