Social media platforms are tightening their restrictions on online bullying, with Instagram the latest to take action by rolling out a new AI-powered feature.
The new tool will notify people when their comment may be considered offensive before it is posted.
Instagram is following in the footsteps of other prominent social media platforms. In late 2018, Instagram's parent company, Facebook, introduced a way for people to hide or delete multiple comments at once from the options menu of their post. The social media giant was also testing ways to more easily search for and block offensive words from appearing in comments.
Twitter has also been taking steps to combat cyberbullying on its platform over the last few years. It recently announced that it is tightening its restrictions on hateful conduct by removing tweets that dehumanize others on the basis of religion.
Instagram has been using AI for years to detect bullying and other types of harmful content in comments, photos and videos. According to the photo- and video-sharing platform, the tools are grounded in a deep understanding of how people bully each other and how they respond to bullying on Instagram, and these are only two steps on a longer path.
"This intervention gives people a chance to reflect and undo their comment … Early tests have shown that it encourages some people to undo their comment and share something less hurtful once they have had a chance to reflect," explained Instagram in a press release.
Additionally, Instagram will soon begin testing "Restrict"—a new way for people to protect their accounts from unwanted interactions without notifying someone who may be targeting them. Restricted people won’t be able to see when the user who restricted them is active on Instagram, or when that user has read their direct messages.
Last year, Instagram introduced a bullying comment filter to proactively detect and hide bullying comments and also launched a new tool to identify and report bullying in photos.