2021 / iOS + ANDROID + WEB / SHIPPED
User Safety Controls
Defining safety for a community of 90 million+ monthly active users.
During COVID, Wattpad saw a large spike in new users, and with it came a large spike in hate & harassment tickets that operationally overloaded our Support team. We took this as an opportunity to build a long-overdue feature we had been advocating for - the ability to Block users. But we quickly realized that Block couldn't be the only solution to our user and business problems.
We needed to take a holistic approach to defining safety on Wattpad that protected not only our users, but our internal teams on the frontline as well.
Team: Audience Trust - Product manager, Frontend + Backend engineers, Trust & Safety champions, Community Wellness team
Role: Product Strategy, UX flows & Designs, Interaction design, Stakeholder management, QA
Our team's mission for 2021 was to help users protect themselves from unwanted interactions.
The success metric:
The proxy for a reduction in harassment on the platform is a reduction in the # of unactionable hate & harassment tickets that our Support team receives.
Understanding the problem space: Harassment on Wattpad
To kick off the discovery process, we first needed a good understanding of the current landscape of harassment on Wattpad. We set out to learn which negative interactions posed the greatest risk to our community and business, and which interactions generated the most unactionable tickets.
1. Consulting the experts: We worked very closely with our stakeholders from our Trust & Safety (T&S) and Community Wellness (CW) teams to understand the current landscape of user safety on Wattpad. Some things we wanted to know were:
What does Wattpad consider to be harassment?
Which user segments receive the most bullying/toxic interactions - writers or readers?
Operationally, what issues were the team facing in dealing with hate & harassment tickets?
Discussing with Trust & Safety, we decided to use unactionable Hate & Harassment tickets as a metric for harassment on Wattpad. This metric was a signal for issues that users could have solved on their own if the right safety tools were provided.
2. Working with data: We worked with the Analytics team to understand which currently unprotected areas of user interaction see the most traffic. We found that Story Comments receive 700 million interactions/month, with Profile conversations a distant second at 6 million/month.
An overview of the safety tools we currently offer to users and how they get tagged in Zendesk. From this map we could explore opportunity areas to help users stay safe.
3. Survey + interview data: We analyzed data from our monthly Sentiment Survey & App Reviews to pull out our users' top pain points around safety. In addition, we interviewed 6 of our top writers to see if our most valuable user segment was facing specific pain points. The major themes of our users' pain points with platform safety were:
What prevents users from feeling safe?
Understanding our users' pain points better, we soon realized that Block is not a feature whose outcome is simply deployment.
What we are really building is a series of safeguards that enable users to feel safe and reduce their exposure to negative interactions. These could be either reactive (a Block feature) or proactive (an anti-bullying marketing campaign).
After presenting our research at the project kickoff, the team agreed the goal wasn't to build a Block feature. We needed to be grounded in our use cases and prioritize them based on where the biggest opportunities to solve the problem were. The outcome is higher user sentiment around platform safety and a reduction in bullying tickets.
Understanding the Opportunities
After aligning the team on the core user problem areas, I walked the team through the current landscape of safety tools we offer our users. We offer the ability to report story comments, private & public messages, and users. We also have a Mute feature, but it isn't fully built out and doesn't sufficiently protect users from harassers once they are muted. This was one of the core issues users raised through our sentiment survey.
With a good understanding of the core user problems around safety and the gaps in our current safety tools, I organized a cross-functional workshop to ideate and prioritize solutions. Our engineers, along with our Trust & Safety and Community Wellness managers, also participated.
The prioritized solutions were: building out our Mute functionality to cover Comments; adding another option within the Report flow to funnel unactionable harassment tickets away from T&S; introducing in-context educational tips into the reporting flow; and Story Block - disabling certain users from accessing a writer's stories.
Solutions we shipped! 🎉
• Conducting competitive analysis to see what other large social media platforms were doing
• Mapping out all user stories/use cases
• Working with engineers to tweak interaction for hiding muted comments
"I don't like what I'm seeing" Report Option
• Competitive analysis
• Working with our Community Wellness/Support team to set up how it's tracked in Zendesk