TikTok rolls out new commenting features aimed at preventing bullying


On the heels of last week’s launch of a new Q&A format for creators responding to viewer questions, TikTok today announced it’s rolling out new commenting features. Creators will now be able to control which comments can be posted on their content, before those comments go live. Another new addition, aimed at users who are commenting, will pop up a box that prompts the user to reconsider posting a comment that may be inappropriate or unkind.

TikTok says the goal with the new features is to maintain a supportive, positive environment where people can focus on being creative and finding community.


Rather than reactively removing offensive comments, creators who enable the new “Filter All Comments” feature get to decide which comments appear on their videos. When the feature is on, they’ll need to approve each comment individually using a new comment management tool.

This feature builds on TikTok’s existing comment controls, which allow creators to filter spam and other offensive comments or filter by keywords, similar to other social apps like Instagram.


But Filter All Comments means comments won’t go live at all unless the creator approves them. This gives creators full control over their presence on the platform and could prevent bullying and abuse. It could also let creators spread false information without any pushback, or make themselves appear better liked than they actually are.

The other feature targets commenters, pushing users to reconsider posting bad comments, meaning those that appear to be bullying or inappropriate. It will also remind users of TikTok’s Community Guidelines and allow them to edit their comments before sharing.


These sorts of “nudges” help by slowing people down and giving them time to pause and think about what they’re saying, instead of reacting on impulse. TikTok already uses nudges to ask users whether they want to share unsubstantiated claims that fact-checkers can’t verify, in an attempt to slow the spread of misinformation.

It has taken other social networks years to add prompts that ask users to stop and think before posting. Twitter, for example, just last month said it was running another test that asks users to reconsider harmful replies. It’s been running variations of this same test for nearly a year.

TikTok says it’s consulting with industry partners in developing its new policies and features, and today also announced a partnership with the Cyberbullying Research Center (CRC), which develops research around cyberbullying and online abuse and misuse. The company says it will collaborate with CRC to develop other initiatives going forward to help promote a positive environment.
