
New research from Binghamton University, State University of New York, suggests that a proposed machine learning framework and expanded use of blockchain technology can help prevent the spread of fake news. The study, led by Thi Tran, assistant professor of management information systems at Binghamton University’s School of Management, offers tools for recognizing patterns in misinformation and helps content creators focus on areas where the misinformation is likely to cause the most public harm.
“I hope this research helps us educate more people about being aware of the patterns,” Tran said, “so they know when to verify something before sharing it and are more alert to mismatches between the headline and the content itself, which would keep the misinformation from spreading unintentionally.”
Tran’s research proposes using machine learning systems to determine how much harm a piece of content could do to its audience. Machine learning is a branch of artificial intelligence and computer science that uses data and algorithms to imitate the way humans learn while gradually improving its accuracy.
For example, the framework would identify stories that circulated during the height of the COVID-19 pandemic promoting false alternative treatments to the vaccine. It would use data and algorithms to detect indicators of misinformation and learn from those examples to improve the detection process.
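To make that idea concrete, here is a minimal sketch of what such a detector could look like. It is an illustration only, not Tran’s actual framework: the scikit-learn pipeline, the toy labeled examples, and the example claim are all assumptions chosen for demonstration.

```python
# A minimal sketch of a misinformation detector of the kind described above:
# learn indicators from labeled examples, then score new claims.
# The tiny dataset and the model choices are illustrative assumptions only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples (1 = misinformation, 0 = reliable reporting).
texts = [
    "Miracle herb cures COVID-19 overnight, doctors stunned",
    "Vaccines contain microchips that track your location",
    "Health agency reports vaccine efficacy results from clinical trials",
    "Hospital data show decline in severe cases after vaccination campaign",
]
labels = [1, 1, 0, 0]

# TF-IDF features plus logistic regression: a simple stand-in for the
# "data and algorithms" that detect indicators of misinformation.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score a new claim; the probability could later feed into a harm assessment.
claim = "New miracle treatment makes the vaccine unnecessary"
print(model.predict_proba([claim])[0][1])  # probability assigned to the misinformation class
```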
The framework would also consider user characteristics, such as prior experience or knowledge about fake news, to help create a harm index. The index would reflect the severity of possible harm to a person in certain contexts if they were exposed to and victimized by the misinformation.
“We’re most likely to care about fake news if it causes harm that impacts readers or audiences. If people perceive there’s no harm, they’re more likely to share the misinformation,” Tran said. “The harms come from whether audiences act according to claims from the misinformation, or if they refuse the proper action because of it. If we have a systematic way of identifying where misinformation will do the most harm, that will help us know where to focus on mitigation.”
Based on the information gathered, Tran said, the machine learning system could help fake news mitigators discern which messages are likely to be the most damaging if allowed to spread unchallenged.
“Your educational level or political beliefs, among other things, can play a role in whether you are likely to trust one misinformation message or not, and those factors can be learned by the machine learning system,” Tran said. “For example, the system can suggest, according to the features of a message and your personality and background and so on, that it’s 70% likely that you’ll become a victim of that specific misinformation message.”
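A rough sketch of how such a harm index could be composed follows. The specific features, weights and 0-to-1 scaling are hypothetical, chosen only to show how message features and user characteristics might be combined into a victim-likelihood estimate and a severity-weighted harm score; they do not come from the study.

```python
# Sketch of the harm-index idea: combine message features with user
# characteristics to estimate how likely a given reader is to be victimized,
# then weight that likelihood by the severity of acting on the claim.
# All features and weights below are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Message:
    misinfo_probability: float   # e.g., output of a classifier like the sketch above
    claim_severity: float        # 0-1: how dangerous acting on the claim would be

@dataclass
class UserProfile:
    fake_news_literacy: float    # 0-1: prior experience recognizing fake news
    topic_knowledge: float       # 0-1: familiarity with the subject area

def victim_likelihood(msg: Message, user: UserProfile) -> float:
    """Estimated chance (0-1) that this user acts on the misinformation."""
    susceptibility = 1.0 - 0.5 * (user.fake_news_literacy + user.topic_knowledge)
    return min(1.0, msg.misinfo_probability * susceptibility)

def harm_index(msg: Message, user: UserProfile) -> float:
    """Severity-weighted harm score used to prioritize mitigation effort."""
    return victim_likelihood(msg, user) * msg.claim_severity

msg = Message(misinfo_probability=0.9, claim_severity=0.8)
reader = UserProfile(fake_news_literacy=0.2, topic_knowledge=0.3)
print(victim_likelihood(msg, reader))  # roughly 0.7 for this low-literacy reader
print(harm_index(msg, reader))         # higher scores flag messages to counter first
```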
Tran’s study also investigates the potential of blockchain technology as a tool to combat fake news. It builds on previous studies by exploring how acceptable such systems would be to users.
Tran’s proposal is to survey 1,000 individuals from two groups: fake news mitigators (government organizations, news outlets, and social network administrators) and content users who may be exposed to fake news. The survey would present three existing blockchain systems and assess the participants’ willingness to use those systems in different scenarios.
According to Tran, one of the favorable aspects of blockchain is its traceability feature, which can help in identifying and categorizing sources of misinformation, enabling the recognition of patterns.
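The sketch below illustrates that traceability property with a toy hash-chained ledger in which each record of shared content points back to its source record, so the origin of a claim can be traced and patterns recognized. It is an assumption-laden illustration, not one of the existing blockchain systems referenced in the proposed survey; the class and field names are invented for this example.

```python
# Toy provenance ledger: each block links to the previous block (tamper evidence)
# and to the record it was derived from (traceability of the claim's source).
import hashlib
import json
import time

class ProvenanceChain:
    def __init__(self):
        self.blocks = []

    def add_record(self, content: str, publisher: str, source_hash: str | None = None) -> str:
        """Append a content record; source_hash links it to the block it was derived from."""
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        record = {
            "content": content,
            "publisher": publisher,
            "source": source_hash,      # traceability link to the originating record
            "prev_hash": prev_hash,     # link to the previous block in the chain
            "timestamp": time.time(),
        }
        record["hash"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.blocks.append(record)
        return record["hash"]

    def trace(self, block_hash: str):
        """Walk the source links back toward the original publisher of a claim."""
        index = {b["hash"]: b for b in self.blocks}
        while block_hash in index:
            block = index[block_hash]
            yield block["publisher"]
            block_hash = block["source"]

chain = ProvenanceChain()
origin = chain.add_record("Miracle cure replaces vaccine", publisher="anon-blog")
repost = chain.add_record("Shared: miracle cure story", publisher="social-account-42", source_hash=origin)
print(list(chain.trace(repost)))  # ['social-account-42', 'anon-blog']
```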
“The research model I’ve built out allows us to test different theories and then prove which is the best way for us to convince people to use something from blockchain to combat misinformation,” Tran said.