
Amnesty International has issued a strong condemnation of X, the social media platform formerly known as Twitter and owned by Elon Musk, for what it describes as a “central role” in fueling the 2024 UK riots. The unrest was sparked by the murder of three young girls at a dance class in Southport in July 2024 and escalated rapidly, driven in part by the spread of misinformation online.
Amnesty’s recent technical analysis points to X’s algorithmic design as a key contributing factor. The organization asserts that the platform systematically prioritizes content that generates outrage and heated engagement, while lacking adequate safeguards to mitigate harmful consequences. This design, Amnesty argues, enabled false claims about the accused attacker’s religion and immigration status to spread rapidly. Even after police confirmed that the individual was a British-born Christian with no known political or religious motives, the misinformation continued to circulate widely on X, reaching tens of millions of users.
According to Amnesty’s findings, much of this misinformation was propagated by far-right influencers and amplified by accounts with large followings, including that of Elon Musk himself. During the unrest, Musk reportedly posted dozens of times, sharing and boosting narratives from far-right and Islamophobic sources. These posts, together with his inflammatory rhetoric about civil strife and his criticism of the UK’s immigration policies, heightened public alarm and further inflamed the disorder.
The riots that followed were marked by orchestrated attacks on migrants and Muslims, including vandalism and attempted arson targeting places of worship and businesses. The violence spread to several cities across England and Northern Ireland, with far-right groups using X to coordinate their activities and rally participants.
Amnesty, along with several other civil rights organizations, holds X’s weak content moderation partly responsible for allowing online misinformation and hate speech to escalate into real-world violence. The group further asserts that Musk’s decisions to reinstate previously banned extremist figures exacerbated the problem, undermining existing safety measures and fostering a permissive environment for harmful content.