Username Filtering Moderation Internal Tool

In 2023, I led the design of Scratch's new username-filtering tool, built to ensure that newly created usernames are safe and aligned with the Scratch Community Guidelines, while also making it easier for Scratch moderation teams to edit safety filters in the existing filter library.

Business Goal & Success Measures

Deliver a new moderation platform iteratively, addressing community safety and operational efficiency.

Deliver a tool that can be implemented within the targeted time frame and that lets the moderation team address 95% of filter requests themselves, with 5% or fewer requiring engineers.

Context: The Scratch Community

Scratch is the largest worldwide coding community for young people and a creative learning platform with over 15 years of history. Scratch has more than 100 million users, with hundreds of new users creating accounts every day. Because Scratch's target audience is children, we need to ensure it is a safe place for them.

One of the many ways to reduce risk to the community is requiring that usernames contain neither personal information nor harmful content. This is enforced by a backend filter tool that detects words that conflict with these safety guidelines.

Researching the Problem

Scenario 1

Users can circumvent the system by writing a prohibited word in different ways. For example, the filter considers “stupid” a bad word and detects variant spellings such as ”s_t_u_p_i_d” or ”s7up1d”. Although many of these variations are already mapped, the filter occasionally fails to catch new ones.
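A minimal sketch of how this kind of variation catching might work. The look-alike mapping and the matching logic below are illustrative assumptions, not Scratch's actual filter:

```python
import re

# Hypothetical digit look-alike mapping; the real filter's tables are internal.
LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "7": "t"})

def normalize(username: str) -> str:
    """Lowercase, replace digit look-alikes, and strip separator characters."""
    lowered = username.lower().translate(LEET_MAP)
    return re.sub(r"[_\-.]", "", lowered)

def is_blocked(username: str, blocked_words: set[str]) -> bool:
    """Flag the username if any blocked word survives normalization."""
    normalized = normalize(username)
    return any(word in normalized for word in blocked_words)

print(is_blocked("s_t_u_p_i_d", {"stupid"}))  # True
print(is_blocked("s7up1d", {"stupid"}))       # True
```

A scheme like this catches mapped substitutions automatically, but any trick outside the mapping table slips through, which is why the team has to keep adding new variations by hand.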

Scenario 2

The current tool sometimes produces false positives, blocking non-problematic usernames. As a result, some users become frustrated because they don't know why their username was rejected. These users will likely attempt another username, or reach out through Scratch's Contact Us page to report the problem.

It is important for the moderation team to add new variations of prohibited words as they appear, while also addressing current user requests, to keep the filter up to date. However, iterating on this tool requires the work of three teams: Communications, Moderation, and Engineering. It is a time-consuming task, and especially frustrating for the Moderation Team. The graphics below show the journey for the two scenarios:

Scenario 1

Scenario 2

Use case: let's use Scenario 2 as an example

A Teacher fails to create 17 new usernames for his class of students. The Teacher sends a message to Scratch's “Contact Us” page saying that he doesn’t understand why username creation wasn't successful.

Someone from the Communications team contacts a Moderator. The Moderator then sends the Engineer responsible for the filter tool a message containing the list of all the names the Teacher tried to create.

The Engineer debugs the list and discovers that ”aar23yellow21” was the problematic username because the filter interprets ”aar23” as a variation of the word ”arse”.

The Engineer explains the problem to the Moderator, who concludes that the username is not problematic and should simply be allowed. But because the current tool is not user-friendly and requires backend knowledge, a dedicated Engineer has to interrupt their ongoing tasks to implement the new filter in the current tool:
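Conceptually, the fix the Engineer implements is an allow-list exception layered over a blocking pattern. A hedged sketch in Python follows; the regexes and the data layout are invented for illustration, and the real tool's patterns are internal:

```python
import re

# Illustrative filter entry: one blocking pattern plus its allowed exceptions.
FILTERS = [
    {
        "identifier": "arse-variants",
        "blocking_pattern": re.compile(r"a+r+s*e?\d*", re.IGNORECASE),
        "allowed_patterns": [re.compile(r"^aar23yellow21$", re.IGNORECASE)],
    },
]

def username_allowed(username: str) -> bool:
    """A username passes if no blocking pattern matches it,
    or if an allowed pattern overrides the match."""
    for entry in FILTERS:
        if entry["blocking_pattern"].search(username):
            if any(p.search(username) for p in entry["allowed_patterns"]):
                continue  # explicit exception overrides the block
            return False
    return True

print(username_allowed("aar23yellow21"))  # True: explicitly allowed
print(username_allowed("aarse99"))        # False: still blocked
```

The exception is deliberately narrow: it whitelists one exact username without loosening the blocking pattern itself, so genuinely problematic variants remain caught.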

How does this tool work and why is it not user-friendly?

The tool is hosted on a one-page form with four fields that must be completed to create a new filter pattern: label, pattern, blocking_pattern, and normalization_overrides. All of them demand a solid understanding of pattern-matching concepts, and although there are instructions next to each field, they are unclear. I also immediately identified interface-design issues that make the instructions on the page even harder to follow:

Some field and input types are inadequate for the data they collect;

The layout is inefficient;

The spacing between fields creates ambiguity: it isn't clear which instructions belong to which field.
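For concreteness, a filter entry submitted through that form might look roughly like this. Only the four field names come from the tool itself; every value and the field semantics below are inferred for illustration, and the real backend schema is internal to Scratch:

```python
# Hypothetical payload for the legacy one-page form.
new_filter = {
    "label": "arse-variants",               # human-readable name for the filter
    "pattern": r"arse",                     # the base word being filtered
    "blocking_pattern": r"a+r+s*e?\d*",     # regex that actually blocks usernames
    "normalization_overrides": {"3": "e"},  # per-filter character substitutions
}

print(sorted(new_filter))
```

Even in this toy form, it is clear why the page demands regex fluency: three of the four fields are raw pattern syntax with no guidance beyond the unclear inline instructions.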

The Solution

Create a user-friendly tool that the Moderation Team can use themselves to quickly update the filters, without the need for engineering work.

Opportunity areas: Researching with main parties

I joined the #filter_requests Slack channel to analyze workflows, and I conducted interviews with the main stakeholders to understand their frustrations, expectations, and needs.

Engineering Team - I learned coding concepts used to design the new tool.

I interviewed the Full Stack Engineer responsible for building the current tool and gained initial knowledge of regular expressions (regex), the pattern-matching notation the tool is built on.

Moderation Team - I learned about how moderators conduct decision making to ensure the safety of the Scratch Community.

I interviewed three senior moderators from a team of 60; one of them was an expert moderator with more than 10 years of experience who showed me other moderation tools and the logic behind coding patterns. The moderators were also able to provide me with additional context in particular use cases and introduced me to the Slack Channel where these requests were made.

Communications Team - I learned about how users contact Scratch and the internal process executed to solve their specific problems.

A Communications Specialist shared the different channels through which their team received username-related requests: the Contact Us page, direct email, or comments in the community.

Scratch Users - I mapped the website's architecture to understand the different types of users and how they create their usernames, and to familiarize myself with how this relates to moderation decisions.

Scratch has three different types of users: common users, teachers, and students, whose accounts are created in different ways. In short, teachers and common users can choose their usernames when creating their profiles, while students require teacher input to start the account creation process.

Identifying pain points and opportunity areas: Giving autonomy to Moderators

Pain point: Moderators rely on Engineers to uncover the reasoning behind problematic variations when non-problematic usernames are blocked.
Opportunity: Design an accessible, user-friendly test-username section.

Pain point: Allowed patterns only exist if there is a blocking pattern created.
Opportunity: Design an interface that connects blocking and allowed patterns.

Pain point: Due to the complexity of the filter tool, technical language needs to be used.
Opportunity: Design an intuitive product: utilize icons and self-explanatory titles, implement guiding actions and tooltips that link to explanatory examples, and show confirmation messages.

Pain point: The only way to look at the list of words currently in the filter tool is by accessing the backend.
Opportunity: Design an accessible, user-friendly library list section.

Pain point: There are limitations to the way filters work, so we cannot change the logic of the system, and there is urgency for the new tool to work.
Opportunity: Follow an incremental process.

Development

With the research done, I designed the new tool's initial screens in Figma. I worked with Scratch's Design System assets that I helped build in order to make construction more efficient and ensure that the new tool was aligned with Scratch's brand. Moreover, during the implementation process, it became more efficient for developers to reuse the variables and components they were already familiar with.

I also built prototypes for all screen interactions to enhance communication with the team during meetings with stakeholders.

I created a section in the app to accommodate a library of filters and redesigned the experience for moderators to manage them.

The filters section contains the list of all the filters added to the tool. Here, moderators have the autonomy to search for specific names, look for specific types of filter lists (for example, White, Black, or Macro lists), and edit or delete existing ones.

I identified that the existing terms confused the moderators. Together, we chose new names that made more sense and were as self-explanatory as possible: Label became Identifier, and Pattern kept its name but was split into two categories: Blocking Pattern and Allowed Pattern.

The colors were chosen according to names moderators were familiar with: a White list contains Allowed Patterns, and a Black list contains Blocking Patterns.

I learned during my research that an Allowed Pattern can only be created if there is an existing Blocking Pattern associated with it. Because of this, I developed a card system where the Blocking Pattern is independent, and the Allowed Pattern only exists inside the Blocking Pattern.
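That parent-child dependency can be expressed directly in the data model. A sketch follows; the class and field names are mine, not the production schema:

```python
from dataclasses import dataclass, field

@dataclass
class AllowedPattern:
    pattern: str

@dataclass
class BlockingPattern:
    identifier: str
    pattern: str
    allowed: list[AllowedPattern] = field(default_factory=list)

    def add_allowed(self, pattern: str) -> AllowedPattern:
        """Allowed Patterns are created only through a parent Blocking Pattern,
        mirroring the card-inside-card UI."""
        exception = AllowedPattern(pattern)
        self.allowed.append(exception)
        return exception

# A standalone Blocking Pattern card, with one exception nested inside it.
card = BlockingPattern("arse-variants", r"a+r+s*e?\d*")
card.add_allowed("aar23yellow21")
print(len(card.allowed))  # 1
```

Modeling the constraint this way means the interface never has to explain the rule: an orphaned Allowed Pattern simply cannot be constructed.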

Adding and editing new filters

I added an “add new” filter button to the filter library screen. Once clicked, this button opens a modal containing the same information as the previous form version.

To improve the user experience, I broke the form into a step-by-step flow with call-to-action buttons, preventing users from feeling overwhelmed by everything they needed to complete.

Since there are technical terms and actions to be executed, we decided to add tooltips to explain each field and link to more advanced explanations.

Confirmation messages

I developed a communication modal to provide user feedback. If the created filter works, the user will see modals with loading and verified filter messages. If the filter is not effective or there is an error, the modal will display a link for the user to read more about the potential problems with the filter.

Test Usernames Section

I designed a section that allows moderators to check problematic usernames and find the blocking patterns behind them. A moderator types a username, the tool shows the associated results, and the moderator can then take the preferred action from the same screen.

The video below illustrates how the action in Scenario 2 would be performed: the moderator tests all 17 listed usernames in the test section, discovers that ”aar23yellow21” is the problematic username, and decides that the easiest way to solve the issue is to add ”aar23yellow21” as an allowed pattern.
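Under the hood, the test section's lookup can be imagined as a simple "which filters match this name" query. The patterns below are made up for the example; the real library lives inside the tool:

```python
import re

# Invented blocking patterns, keyed by their identifiers.
BLOCKING_PATTERNS = {
    "arse-variants": re.compile(r"a+r+s*e?\d*", re.IGNORECASE),
    "stupid-variants": re.compile(r"s[t7][u_]*p[i1]d", re.IGNORECASE),
}

def matching_filters(username: str) -> list[str]:
    """Return the identifiers of every blocking pattern the username trips."""
    return [name for name, rx in BLOCKING_PATTERNS.items() if rx.search(username)]

print(matching_filters("aar23yellow21"))  # ['arse-variants']
print(matching_filters("bluecat42"))      # []
```

Surfacing the matching identifiers, rather than a bare accept/reject, is what lets the moderator jump straight from a test result to editing the responsible filter.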

Moreover, usability-test results showed that this section is the most important to moderators, so it became the tool's default page.

Stakeholder Feedback and Loops of Iteration

During the development process, the team decided that, due to the natural complexity of the tool, gathering everyone in the same room to align expectations and avoid miscommunication would be the best approach for feedback.

Many feedback-and-iteration meetings took place before the final result was reached. I conducted usability tests with moderators, using the prototype I created in Figma, to find pain points and ensure they could use the interface, and I continued adjusting the tool based on their feedback.

After conversations with engineers and moderators, we collectively decided to make this project incremental due to the urgent need for the tool. For this reason, we deferred some ideas from the first version, such as:

User-friendly normalization override;

Ability to test more than one filter at a time;

A button to publish changes to the backend.

Outcomes

Business

Although the project has not been implemented yet, we anticipate several positive business outcomes based on our research and prototype testing:

Increased Efficiency: By providing moderators with a user-friendly tool, we expect a significant reduction in the time needed to manage username filters, allowing the moderation team to address 95% of requests without engineer intervention.

Enhanced User Experience: The improved feedback system aims to reduce user frustration by offering clear, actionable information when usernames are blocked, potentially lowering support requests.

Operational Cost Savings: Streamlining the moderation process is expected to decrease the reliance on engineers, leading to operational cost savings and more efficient resource allocation.

Improved Community Safety: With an intuitive tool for managing username filters, we anticipate a more robust filtering system that better protects the community by efficiently catching inappropriate usernames.

These projected outcomes are based on extensive user testing and stakeholder feedback, providing a strong foundation for the expected benefits post-implementation.

Professional growth

Working on the Moderation Tool project has significantly contributed to my professional development in several key areas:

User-Centered Design: I deepened my understanding of user needs by conducting extensive research and usability testing, ensuring that the tool effectively addresses the challenges faced by moderators.

Collaboration: This project honed my collaboration skills, as I worked closely with engineers, stakeholders, and users to gather feedback and iterate on the design.

Problem-Solving: Developing innovative solutions to streamline the moderation process and improve user feedback mechanisms enhanced my problem-solving abilities.

Project Management: Managing the project from concept to prototype sharpened my project management skills, including timeline adherence and resource allocation.

These experiences have collectively enriched my expertise, preparing me for future challenges in digital product design.