
Let’s use peer pressure to weed out all the fake news
In recent years, the spread of inaccurate information, popularly known as ‘fake news,’ has increased significantly. It has contributed to political polarization and shaped people’s opinions on topics such as vaccine safety, climate change, and diversity. Current countermeasures, such as flagging false posts, have not been very effective. Professor Tali Sharot (UCL Psychology & Language Sciences, Max Planck UCL Centre for Computational Psychiatry and Ageing Research, and Massachusetts Institute of Technology) says she’s found a solution.
“Part of why misinformation spreads so readily is that users are rewarded with ‘likes’ and ‘shares’ for popular posts, but without much incentive to share only what’s true. Here, we have designed a simple way to incentivize trustworthiness, which we found led to a large reduction in the amount of misinformation being shared.”
In a recent paper published in Cognition, Professor Sharot and colleagues found that people are more likely to share statements on social media that they have previously been exposed to. Repeated information tends to be judged as more likely to be true, which means repetition alone can lend misinformation credibility.
For their latest study, the researchers tested a potential solution on a simulated social media platform used by 951 participants across six experiments. On the platform, users shared news articles, half of which were inaccurate. Other users could react with ‘like’ or ‘dislike’ and repost stories; in some versions of the experiment, they could also react with ‘trust’ or ‘distrust’.
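To make the setup concrete, here is a minimal sketch of such a platform in Python. It is a hypothetical illustration, not the authors’ code: the reaction names come from the description above, but the data structures and function names are invented.

```python
import random

# Hypothetical sketch of the simulated feed described above; the study's
# actual implementation is not reproduced here. Half the stories are
# inaccurate, and the available reactions depend on the condition.

REACTIONS_BASELINE = {"like", "dislike"}
REACTIONS_INCENTIVE = {"like", "dislike", "trust", "distrust"}

def make_feed(n_posts=20, seed=0):
    """Build a feed in which half the stories are inaccurate."""
    rng = random.Random(seed)
    posts = [{"id": i, "is_true": i < n_posts // 2, "reactions": []}
             for i in range(n_posts)]
    rng.shuffle(posts)
    return posts

def react(post, reaction, condition="incentive"):
    """Record a reaction if it is allowed in this condition."""
    allowed = REACTIONS_INCENTIVE if condition == "incentive" else REACTIONS_BASELINE
    if reaction not in allowed:
        raise ValueError(f"{reaction!r} not available in the {condition} condition")
    post["reactions"].append(reaction)

feed = make_feed()
react(feed[0], "trust")        # only possible in the incentive condition
print(feed[0]["reactions"])    # ['trust']
```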
The researchers found that the new incentive structure was popular: participants used the trust/distrust buttons more often than the like/dislike buttons. It was also effective: users began posting more true information in order to earn ‘trust’ reactions. Further analysis using computational modeling indicated that, after the trust/distrust reactions were introduced, participants gave more weight to the reliability of news stories when deciding whether to repost them.
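The article does not detail the model itself, but the finding maps onto a simple weighted-evidence picture: the decision to repost becomes more sensitive to a story’s perceived reliability once trust reactions exist. Below is a hedged sketch of that idea; the function, parameter names, and weight values are all invented for illustration and are not taken from the paper.

```python
import math

# Illustrative logistic choice model: the probability of reposting a
# story depends on a weighted mix of its perceived reliability and its
# social feedback. Weights here are made up for demonstration only.

def p_repost(reliability, popularity, w_reliability, w_popularity, bias=0.0):
    """Logistic repost probability from weighted story features."""
    utility = w_reliability * reliability + w_popularity * popularity + bias
    return 1.0 / (1.0 + math.exp(-utility))

story = {"reliability": 0.9, "popularity": 0.3}

# The reported effect corresponds to a larger reliability weight once
# trust/distrust reactions are available.
baseline  = p_repost(**story, w_reliability=0.5, w_popularity=2.0)
incentive = p_repost(**story, w_reliability=3.0, w_popularity=2.0)
print(f"baseline: {baseline:.2f}, with trust buttons: {incentive:.2f}")
```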
Facebook’s Like and Share buttons
The Like and Share buttons on Facebook were introduced in 2009 as a way for users to easily interact with and share content on the platform. According to Facebook, the idea for the Like button came from the desire to enable users to provide positive feedback without having to comment on a post.
The Share button, on the other hand, was added to make it easier for users to repost content they found interesting or valuable. Prior to the Share button, users had to copy and paste a post’s URL if they wanted to share it with their own network.
Over time, the Like and Share buttons have become integral to Facebook’s user experience: the Like button is used over 10 billion times a day, and the Share button spreads countless posts and articles across the platform.
Promoting accurate beliefs
In addition, the researchers found that participants who used the versions of the platform with trust/distrust buttons ended up with more accurate beliefs.
Ph.D. student Laura Globig (UCL Psychology & Language Sciences, Max Planck UCL Centre for Computational Psychiatry and Ageing Research, and Massachusetts Institute of Technology) said:
“Buttons indicating the trustworthiness of information could easily be incorporated into existing social media platforms, and our findings suggest they could be worthwhile to reduce the spread of misinformation without reducing user engagement.”