by Danny Bradbury
Just about to share an article with a sensational headline? Stop! Did you at least read it first?
Sharing clickbait containing spurious content without bothering to check it over is a perennial problem for attention-challenged social media users (hey! squirrel!) and now Twitter wants to help stop it. The company has launched a test feature that reminds you to read articles before retweeting them.
Reportedly launched on just a few US Android phones for now, the service will warn users if they try to retweet articles that they haven’t opened, the company announced in a tweet from its support channel:
Sharing an article can spark conversation, so you may want to read it before you Tweet it.
To help promote informed discussion, we’re testing a new prompt on Android –– when you Retweet an article that you haven’t opened on Twitter, we may ask if you’d like to open it first.
— Twitter Support (@TwitterSupport) June 10, 2020
So if you haven’t clicked on a link in Twitter and you try to retweet it, the service will cough politely and ask if you’re sure. You can go ahead and retweet it anyway, if you so choose, meaning that devoted readers of Tin Foil Hat Times, Conspiracy Monthly, or the National Shouty Review can still happily spread the crazy.
It’s a service for folks that want to do the right thing and just need a reminder now and then to hold back on the outrage long enough to collect the facts.
Reactions to the new feature were predictably split. On the one hand was the ‘good job for stopping the thoughtless spread of disinformation’ crowd:
This is a really good idea. I wish you’d thought of this years ago, TBH.
(Most people will skip through the prompt, but still.)
— Jedi, Interrupted 🏳️🌈 (@JediCounselor) June 10, 2020
On the other side was the ‘hands off my tweets’ crowd:
We are ADULTS. We don’t need you to parent us.
The entire world
— 🌟Queen of HOPE 🌟 (@Quirlygirl) June 10, 2020
We’ll lump this group in with those who criticize Twitter for violating their First Amendment rights, in the deluded belief that it’s a publicly owned service rather than a private company that owns the platform, provides it for free, answers mainly to its shareholders, and can do what it wants.
There’s a third group that looked at the feature in a broader context, suggesting that Twitter focus on solving other problems (like dealing with white supremacist tweets) before delving into this kind of thing:
How about if you promote informed discussion by banning Nazis?
— Patrick Mooney (@patrick_mooney) June 10, 2020
There are indeed other issues facing the company, but they’re diverse ones that it’s trying to solve at scale.
The company has taken some other measures, including the introduction of a feature flagging misleading Tweets and pointing users to verified facts. It later famously used that feature on a tweet from President Trump.
It tried to address hateful conduct recently, updating its rules to cover language that dehumanizes people on the basis of age, disability, or disease. It also experimented with a service that would warn people if there was harmful content in their reply:
When things get heated, you may say things you don’t mean. To let you rethink a reply, we’re running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it’s published if it uses language that could be harmful.
— Twitter Support (@TwitterSupport) May 5, 2020
Amid the judgements and praise, there were some interesting suggestions. Some people wanted Twitter to flag messages so that others could see when a person had retweeted an article without reading it. Others pointed out that they may already have read an article on another device even if they haven’t opened it on Twitter.
Twitter Support describes this as an experiment, though, which is presumably why it’s running a canary test of the code. There are lots of ways that it could develop the feature if it ends up meeting its opaque success criteria.
In the meantime, if users don’t like it, they could always find a Mastodon instance with rules they do like – and donate to the operators, thereby taking a stake in the system they’re using.