
Twitter tests new feature that auto-blocks accounts for hateful remarks

Twitter is testing a new ‘Safety Mode,’ which automatically blocks users who send unsolicited messages judged disrespectful or hostile. The social media giant, which has been criticised for failing to address an onslaught of online toxicity, said the new setting would ‘reduce the burden on people dealing with unwelcome interactions.’

When Safety Mode is enabled on an account, outsiders who interrupt its conversations with language flagged as ‘potentially harmful’ are automatically blocked. These blocks are temporary, lasting seven days.

The function will be tested by a select group of users who have been nominated, with a focus on ‘people from marginalised communities and female journalists,’ two groups who face disproportionately harsh treatment on the network.

Twitter explained in a blog post: ‘Safety Mode is a feature that temporarily blocks accounts for seven days for using potentially harmful language — such as insults or hateful remarks — or sending repetitive and uninvited replies or mentions. When the feature is turned on in your Settings, our systems will assess the likelihood of a negative engagement by considering both the tweet’s content and the relationship between the tweet author and replier. Our technology takes existing relationships into account, so accounts you follow or frequently interact with will not be auto-blocked. Authors of Tweets found by our technology to be harmful or uninvited will be auto-blocked, meaning they’ll temporarily be unable to follow your account, see your Tweets, or send you Direct Messages.’

Aside from being unable to contact the account holder, users who send potentially harmful messages face no further automated action while Safety Mode is enabled.


Jarrod Doherty, a senior product manager at Twitter, said: ‘We want you to enjoy healthy conversations, so this test is one way we’re limiting overwhelming and unwelcome interactions that can interrupt those conversations. Our goal is to better protect the individual on the receiving end of tweets by reducing the prevalence and visibility of harmful remarks. Throughout the product development process, we conducted several listening and feedback sessions for trusted partners with expertise in online safety, mental health, and human rights.’

Last month, Twitter also began testing a button that, for the first time, allows users to report disinformation on the platform.
