In a move to clean up the site, Twitter is now testing a new prompt that’ll ask users to reconsider their language before sending replies.
In a post, the company said the prompt will give you the option to revise a reply before sending it if it contains language the algorithm considers harmful.
Twitter has not defined “harmful” in this context, though we presume it aligns with the company’s hate speech policy, which is regularly updated.
The test is currently limited to select iOS users.
“When things get heated, you may say things you don’t mean,” the company wrote in a tweet.
A Twitter spokesperson reiterated that once a tweet is sent, there’s no option to edit it. That’s consistent with the company’s firm stance against an edit button, which may never arrive, according to CEO Jack Dorsey.
Does this kill your free speech on Twitter? Apparently not. In a reply to Bloomberg, a company spokesperson said Twitter users will still be able to send the original reply even after seeing the prompt. As much as the company is trying to rid the platform of “harmful” language, it still wants to uphold free speech.
This feature should sound familiar if you’re a regular Instagram user, however. In the past, the Facebook-owned platform rolled out a similar algorithm, dubbed AI Rethink, that scans users’ replies before they are sent out.
The AI tool asks “Are you sure you want to post this?” to let users reconsider what they have written before replying. And just like what’s coming to Twitter, Instagram users can still hit the post button anyway.