Meta’s social media platforms—Facebook, Instagram, and Threads—have opened the door to more abuse and bickering after the company loosened its restrictions on speech. Meta has given no official reason for the changes, but it may be seeking to attract more users after abandoning a plan to populate the platforms with AI-powered bots. That plan was dropped in response to public outcry, leaving Meta to look for other ways to boost engagement.
According to Meta’s updated language policy, “We allow accusations of mental illness or abnormality when they are based on gender or sexual orientation, including political and religious discourses about transgenderism and homosexuality, as well as casual use of words like ‘weird.’” The policy also states, “We allow content that asserts gender restrictions on military, law enforcement, and teaching. We also allow the same content based on sexual orientation when it is based on religious beliefs.”
Another section that once banned dehumanizing references to transgender or non-binary people—such as calling them “it” or referring to women “as household items, property, or objects in general”—has been removed entirely. These changes have raised concerns that hate speech could be more prevalent on the platforms.
Policy Updates
Meta’s oversight board acknowledges these developments and has stated it will monitor the situation closely, given the potential for further harm. In its words, “There are many instances where hate speech can lead to real-life harm, so we’ll be watching this space very closely.”
Additionally, a memo from Meta’s new head of policy, Joel Kaplan, indicates the company is “getting rid of a number of restrictions on topics like immigration, gender identity, and sex, which are the subject of frequent political discussion and debate. It’s not right that things can be said on TV or the floor of Congress, but not on our platforms.”
Wired reports that these changes have “blindsided” organizations that had partnered with Meta in its now-scuttled moderation efforts. One unnamed editor at a fact-checking organization expressed concern that the fallout from Meta’s decision “will eventually wear us down.”
Ongoing Monitoring and Reactions
Despite the adjustments, Meta maintains that it is simply allowing conversation around sensitive topics to unfold more freely. Critics argue this shift could increase harassment and targeted hate speech, putting marginalized groups at greater risk, NIX Solutions notes. Meta’s oversight board insists it will keep a close eye on emerging content, and independent organizations are also staying vigilant.
In the meantime, we encourage everyone to stay aware of these changes, and we’ll keep you updated on any new developments. While questions remain about the motivations behind Meta’s decision, the impact on online discourse—and whether it leads to a surge in harmful content—will be closely observed by both the company and external watchdogs.