Manufacturing Consent: The Propaganda Model for Social Media

Pranav Jeevan P

The rise of social media has brought about a new era of communication and information dissemination. However, the dominance of a handful of companies (WhatsApp, Instagram, Threads, and Facebook, owned by Meta; YouTube, owned by Google; and X, formerly Twitter) has raised concerns about their monopoly power and its impact on democracy. These companies control the flow of information and speech, which can lead to censorship, bias, and the suppression of alternative viewpoints. The concentration of power in so few hands limits consumer choice and privacy, as users are forced to rely on a small number of platforms for their social media needs. These platforms also enable the spread of misinformation, propaganda, and hate speech, which can undermine democratic institutions and processes.

Social media platforms in India have become a breeding ground for manipulative practices that shape narratives and build public opinion. This is done through influencers and groups, both voluntary and ideologically driven, who amplify manufactured narratives facilitated by IT cells and right-wing think tanks. These narratives reach the public through a network of tweets, Facebook posts, WhatsApp messages, Instagram reels, and YouTube videos. This ecosystem employs astroturfing, a technique that creates the illusion of grassroots support, to orchestrate the targeted harassment of academics, to run campaigns against activists and human rights defenders, and to discredit protests against the ruling government’s policies, such as the farmers’ protest. It also works to legitimize false claims made by politicians and propaganda movies, while aiming to discredit and attack fact-checkers.

Influencers with millions of followers are paid to promote specific messages through their handles. This ecosystem has expanded, with right-wing individuals benefiting financially from amplifying propaganda. Distortion of historical facts and manipulation of historical narratives to fit the agenda of the ruling regime have become rampant. These narratives often aim to depict religious minorities as aggressors and incite viewers to adopt a militaristic stance.

Social media platforms have become a hub for right-wing forces to spread an agenda that seeks to marginalize communities and suppress criticism. India is WhatsApp’s largest market, with more than 500 million users. Social media researchers, government officials, and WhatsApp itself have acknowledged the platform’s potential as a tool to fan polarization and stoke violence. According to a field study conducted in 2020, Indian users told Meta researchers that they “saw a large amount of content that encouraged conflict, hatred and violence” that was “mostly targeted toward Muslims on WhatsApp groups.”

Similar to the lack of representation of Dalit Bahujan Adivasis (DBA) in Indian media houses, there is a lack of representation of their voices within the companies that build and run these platforms. This allows savarna narratives to be encouraged and Bahujan voices to be silenced. Marginalized groups regularly face hate and harassment based on their caste and are targeted with abuse for availing reservations. Such hateful expression often emerges as a savarna reaction to DBA resistance and social justice movements. It can silence marginalized groups and individuals, exclude them from conversations, and adversely affect their physical and mental well-being. The community guidelines, policies, and transparency reports of Facebook (Meta), X (Twitter), and YouTube show that they incorporated “caste” as a protected characteristic in their hate speech and harassment policies only a few years ago, many years after entering the Indian market, showing a disregard for the regional context of their users. Even after these policy changes, many platforms whose forms for reporting harmful content list gender and race still do not list caste. The lack of diversity in the hiring of moderators, who are mostly savarnas, further works to silence DBA voices. Casteist attacks often go unchecked, with platforms claiming that they do not violate “community guidelines.” Even public callouts of discrimination lead to no tangible consequences, because both those who curate the guidelines for these platforms and those who gain prominence on them are savarnas.

When Meta found a large collection of fake accounts spreading disinformation that put Kashmiri journalists in danger, its Indian executives argued against removing them and let the disinformation continue to spread for years. They warned against antagonizing the government because they were worried they could be imprisoned. Under pressure from the country’s right-wing government, Facebook has fallen short of its professed ideal of curbing fake news in India. When harmful content is spread on Facebook by right-wing politicians or their allies, the platform has been reluctant to act and take down the hate speech.

The big social media companies, which have seen user numbers in the US and EU plateau and whose profits are critical to their Wall Street shareholders, see India as the largest remaining market, with its substantial English-speaking and rapidly growing population. The number of Facebook users in India is greater than the entire U.S. population, and India is also one of the biggest markets for X (Twitter). These companies do not want to filter content, even when it violates their hate speech policies, if taking it down could antagonize the ruling government, whose goodwill is essential to their market interests. Facebook’s cautious approach to moderating pro-government content in India was often exacerbated by its executives in the country, who were hired for their political experience or relationships with the government and who held political views aligned with the ruling party. Facebook’s top policy person and lobbyist in the region, Ankhi Das, told employees that taking down posts calling for violence against minorities would hurt the company’s business prospects. Interviews and documents show that local Facebook executives failed to take down videos and posts of Hindu nationalist leaders even when they openly called for violence against minorities. Facebook did not stop hate speech or calls for violence ahead of the 2020 Delhi pogrom.

Compared to Facebook, Twitter had been more forceful in pushing back against the Indian government. Facebook’s India team was especially nervous after the 2021 police raids on Twitter, when the Indian government was feuding with the company over its refusal to take down tweets from protesting farmers. Officials dispatched police to the home of Twitter’s India head and sent anti-terrorism units to two Twitter offices. Some officials publicly threatened Twitter executives with jail time. The raids and the public criticism from government officials scared off firms that Twitter had planned to use for promotion. Twitter had promised Wall Street investors 3x user growth, and the only way that was going to be possible was through India, which meant staying on the good side of the government.

On April 6, 2023, the government announced a state-run fact-checking unit with sweeping powers to label any piece of information related to the government as “fake, false or misleading” and have it removed from social media. It has tweaked the tech rules so that platforms such as Facebook, Twitter, and Instagram must now take down content flagged by this fact-checking body. Internet service providers are also expected to block URLs to such content. Failure to comply could cost the platforms the safe harbor protection that shields them from legal action over content posted by their users. In effect, any content that is critical of the government or challenges its narratives can be removed, silencing opposition and activist voices.

The current government is setting an example of how an authoritarian government can dictate to the giant social media platforms what content they must preserve and what they must remove, regardless of the companies’ rules. Countries including Brazil, Nigeria, and Turkey are following the Indian model. In 2021, Brazil’s then president, Jair Bolsonaro, sought to prohibit social networks from removing posts, including his own, that questioned whether Brazil’s elections would be rigged. In Nigeria, then-President Muhammadu Buhari banned Twitter after it removed one of his tweets threatening a severe crackdown on rebels.

The profiteering and bias of the big social media giants are being exposed in the current attack on Gaza by Israel. Meta-owned Instagram allegedly shadow-banned pro-Palestinian users, who reported a sudden drop in their followers and views, while pro-Israeli and anti-Arab hate speech continued unabated. During the Russian invasion of Ukraine in February 2022, Meta proactively blocked access to Russian state-controlled media, took down accounts belonging to state-affiliated media, prohibited ads from such handles, and issued fact-check labels. X (then Twitter) also added labels to tweets sharing links to Russian state-affiliated media websites. Russia retaliated by banning Facebook, Instagram, and X in the country. But in the context of the Israel-Palestine conflict, these platforms did not take equally drastic measures to counter state-sponsored narratives. The Israeli Foreign Affairs Ministry was pushing paid ads on X and YouTube to shape public opinion around the war, whereas in 2022 Russia and Ukraine were temporarily stopped from buying ads on X to “ensure critical public safety information is elevated,” and Russian media could not buy ads on YouTube either. Additionally, Meta’s ban on Hamas is mirrored by its alleged inaction against pro-Russian mercenary outfits like the Wagner Group, which has been allowed to skirt the company’s policy on “Dangerous Organizations.” These inconsistencies in how the major social media platforms responded to the two events, the Russia-Ukraine war and the Israel-Palestine conflict, show that they readily abide by narratives that favor Western governments.

These social media firms, which make billions in profit, have a responsibility to moderate content fairly and block fake news. But the actions they have taken out of greed have left them ill-prepared to handle the rise in propaganda. Paid verification and ad policies that promise boosted engagement and larger audience reach allow state-sponsored propaganda to reach a larger population, drowning out opposition and counter-narratives. Social media companies have also cut hundreds of content moderation jobs during the ongoing wave of tech layoffs, making them less capable of curbing online abuse. Among the job functions affected by those reductions are the “trust and safety” teams that enforce content policies and counter hate speech and disinformation. Alphabet reportedly reduced the workforce of a Google unit that builds content moderation tools by at least a third. Meta’s main subcontractor for content moderation in Africa cut 200 employees as it shifted away from content review services. Mass layoffs at X (Twitter) affected many staffers charged with curbing prohibited content like hate speech and targeted harassment, and the company disbanded its Trust and Safety Council.

In the book Manufacturing Consent, Edward S. Herman and Noam Chomsky introduce the propaganda model for the manufacture of public consent. The dominant social media firms are large profit-based entities that cater to the financial interests of their owners, stakeholders, and controlling investors. The major revenue of these companies comes from advertising: they use data on users’ interests, behaviors, and demographics to target ads at specific groups, and they sell this data to third-party companies. Since the majority of their revenue comes from advertising, they must cater to the political prejudices and economic desires of their advertisers, which are mostly large corporations or governments, at the expense of the public interest. The result is the unconstrained spread of propaganda and the simultaneous suppression of dissenting voices on social media. These platforms can be used to manipulate public opinion by targeting specific groups of people with propaganda messages. This creates filter bubbles and echo chambers that reinforce existing beliefs and limit exposure to alternative viewpoints; it can also lead to the spread of extremist views and the normalization of hate speech. Social media platforms can threaten democracy by spreading disinformation, manipulating public opinion, undermining democratic institutions, and threatening privacy and security. It is important for users to be aware of these risks and to critically evaluate the information they see on these platforms.

~~~

Pranav Jeevan P is currently a PhD candidate in Artificial Intelligence at IIT Bombay. He earlier studied quantum computing at IIT Madras and robotics at IIT Kanpur.
