ISLAMABAD: Meta’s announcement of a privacy policy update related to its AI systems has sparked worldwide backlash, prompting internet users to escalate the “Delete Facebook” movement across various platforms.
Meta stated that the revised policy permits the company to use interactions with its AI-driven chatbots to improve its AI systems and provide tailored suggestions. The company clarified that this change excludes private messages, discussions with friends and family, and any encrypted communications.
Nonetheless, the explanation has done little to alleviate users’ worries. Social platforms have been flooded with backlash, with numerous individuals accusing Meta of gaining access to images, audio recordings, messages and AI conversation records for analysis. Users contend that the updated policy increases the potential for data abuse.
In a statement, Meta emphasized that it will neither examine nor utilize sensitive details, such as religious affiliations, health records, sexual orientation, political opinions or personal history, for AI development or targeted marketing.
The company further explained that voice-activated AI tools will turn on the microphone only with direct user consent, and that the screen will visibly indicate when audio functions are active.
As debates intensify, concerns about digital privacy and the boundaries of AI training are once again pushing users to question the safety of their data, reviving global calls to step away from the platform entirely.
Meta hides evidence of harms caused by social media
What is the social responsibility of social media companies? Should users’ mental and social safety come first, or business interests? On this question, Meta has remained silent, and is now accused of hiding evidence of the harm caused by its social platforms.
A lawsuit filed by American school districts has revealed that Meta halted research into the effects of Facebook and Instagram on mental health after the findings indicated that the platforms were harming users. According to the Reuters news agency, the documents show that experiments conducted by the company itself concluded that users who stayed away from these platforms for a few days felt calmer and less stressed. The research was stopped before these results could reach the outside world.
The lawsuit alleges that Meta not only concealed the negative evidence but also delayed further research, internally dismissing it as the result of “misleading narratives”. Notably, the company’s top executives were privately told that the research findings were accurate. One researcher even compared the situation to the silence of the tobacco industry, which knew about the harmful effects of its products but withheld the truth from the public.
In addition to Meta, the lawsuit makes serious allegations against other major tech companies: Google, TikTok, and Snapchat. According to the claims, these platforms not only tolerated underage users but, in some cases, deliberately made decisions that prioritized increased engagement and business interests over the online safety of children and young people.
The documents also revealed that TikTok offered financial support to children’s rights organizations to strengthen its narrative; in effect, the company took steps that enabled these organizations to make public statements in TikTok’s favour. Similarly, Meta’s internal records revealed that the company designed some safety features in ways that discouraged their use. Testing of these features was also halted several times so that users would not reduce their time or engagement on the platform.
Some documents also give the impression that the delay in addressing the risks faced by children was not merely a technical issue but a matter of priorities, with Mark Zuckerberg reportedly indicating in a message that building the Metaverse was more important to him than protecting children online. Meta strongly denies all these allegations, arguing that the quotes in the lawsuit were taken out of context and that the reality is far more complex. According to a company spokesperson, Meta has for years taken significant steps to protect young users, implementing new policies and collaborating with global experts to address these threats.
However, the court proceedings could take the debate to a new level in the coming months, especially as the question of whether social media companies truly prioritize the mental and social safety of their users over business interests is being raised with increasing urgency.