Facebook dithered in curbing divisive user content in India

Facebook lacked enough local language moderators to stop misinformation that at times led to real-world violence, according to leaked documents obtained by The Associated Press.
Matt Rourke/AP

The memo, circulated among other employees, did not answer that question. But it did expose how the platform's own algorithms or default settings played a part in spurring such problematic content. The employee noted that there were clear "blind spots," particularly in "local language content." They said they hoped these findings would start conversations on how to avoid such "integrity harms," especially for those who "differ significantly" from the typical U.S. user.
Even though the research was conducted during three weeks that weren't an average representation, they acknowledged that it did show how such "unmoderated" and problematic content "could totally take over" during "a major crisis event."
The Facebook spokesperson said the test study "inspired deeper, more rigorous analysis" of its recommendation systems and "contributed to product changes to improve them."
"Separately, our work on curbing hate speech continues and we have further strengthened our hate classifiers, to include four Indian languages," the spokesperson said.
Other research files on misinformation in India highlight just how massive a problem it is for the platform.
In January 2019, a month before the test user experiment, another assessment raised similar alarms about misleading content. In a presentation circulated to employees, the findings concluded that Facebook's misinformation tags weren't clear enough for users, underscoring that it needed to do more to stem hate speech and fake news. Users told researchers that "clearly labeling information would make their lives easier."
Again, it was noted that the platform didn’t have enough local language fact-checkers, which meant a lot of content went unverified.
Alongside misinformation, the leaked documents reveal another problem dogging Facebook in India: anti-Muslim propaganda, especially by Hindu-hardline groups.
India is Facebook’s largest market with over 340 million users — nearly 400 million Indians also use the company’s messaging service WhatsApp. But both have been accused of being vehicles to spread hate speech and fake news against minorities.
In February 2020, these tensions came to life on Facebook when a politician from Modi’s party uploaded a video on the platform in which he called on his supporters to remove mostly Muslim protesters from a road in New Delhi if the police didn’t. Violent riots erupted within hours, killing 53 people. Most of them were Muslims. Only after thousands of views and shares did Facebook remove the video.

In April, misinformation targeting Muslims again went viral on its platform as the hashtag "Coronajihad" flooded news feeds, blaming the community for a surge in COVID-19 cases. The hashtag was popular on Facebook for days but was later removed by the company.
For Mohammad Abbas, a 54-year-old Muslim preacher in New Delhi, those messages were alarming.
Some video clips and posts purportedly showed Muslims spitting on authorities and hospital staff. They were quickly proven to be fake, but by then India’s communal fault lines, still stressed by deadly riots a month earlier, were again split wide open.
The misinformation triggered a wave of violence, business boycotts and hate speech toward Muslims. Thousands from the community, including Abbas, were confined to institutional quarantine for weeks across the country. Some were even sent to jails, only to be later exonerated by courts.
"People shared fake videos on Facebook claiming Muslims spread the virus. What started as lies on Facebook became truth for millions of people," Abbas said.
Criticisms of Facebook's handling of such content were amplified in August of last year when The Wall Street Journal published a series of stories detailing how the company had internally debated whether to classify a Hindu hard-line lawmaker close to Modi's party as a "dangerous individual" — a classification that would ban him from the platform — after a series of anti-Muslim posts from his account.
The documents reveal the leadership dithered on the decision, prompting concerns among some employees; one wrote that Facebook was only designating non-Hindu extremist organizations as "dangerous."
The documents also show how the company’s South Asia policy head herself had shared what many felt were Islamophobic posts on her personal Facebook profile. At the time, she had also argued that classifying the politician as dangerous would hurt Facebook’s prospects in India.
The author of a December 2020 internal document on the influence of powerful political actors on Facebook policy decisions notes that "Facebook routinely makes exceptions for powerful actors when enforcing content policy." The document also cites a former Facebook chief security officer saying that outside of the U.S., "local policy heads are generally pulled from the ruling political party and are rarely drawn from disadvantaged ethnic groups, religious creeds or castes," which "naturally bends decision-making towards the powerful."
Months later the India official quit Facebook. The company also removed the politician from the platform, but documents show many company employees felt the platform had mishandled the situation, accusing it of selective bias to avoid being in the crosshairs of the Indian government.
"Several Muslim colleagues have been deeply disturbed/hurt by some of the language used in posts from the Indian policy leadership on their personal FB profile," an employee wrote.
Another wrote that "barbarism" was being allowed to "flourish on our network."
It’s a problem that has continued for Facebook, according to the leaked files.
As recently as March this year, the company was internally debating whether it could control the "fear mongering, anti-Muslim narratives" pushed on its platform by Rashtriya Swayamsevak Sangh, a far-right Hindu nationalist group of which Modi is also a member.
In one document titled "Lotus Mahal," the company noted that members with links to the BJP had created multiple Facebook accounts to amplify anti-Muslim content, ranging from "calls to oust Muslim populations from India" to "Love Jihad," an unproven conspiracy theory by Hindu hard-liners who accuse Muslim men of using interfaith marriages to coerce Hindu women into changing their religion.
The research found that much of this content was "never flagged or actioned" because Facebook lacked "classifiers" and "moderators" in Hindi and Bengali. Facebook said it added hate speech classifiers in Hindi starting in 2018 and introduced Bengali ones in 2020.
The employees also wrote that Facebook hadn't yet "put forth a nomination for designation of this group given political sensitivities."
The company said its designations process includes a review of each case by relevant teams across the company and is agnostic to region, ideology or religion, focusing instead on indicators of violence and hate. It did not, however, reveal whether the Hindu nationalist group had since been designated as "dangerous."