Meta pressed to compensate war victims amid claims Facebook inflamed Tigray conflict

Meta is facing growing calls to set up a restitution fund for victims of the Tigray war, which Facebook is alleged to have fueled and which led to more than 600,000 deaths and the displacement of millions of others across Ethiopia.

Rights group Amnesty International, in a new report, has urged Meta to set up a fund, which would also benefit other victims of conflict around the world, amid heightened fears that the social site’s presence in “high-risk and conflict-affected areas” could “fuel advocacy of hatred and incite violence against ethnic and religious minorities” in new regions. The Amnesty International report outlines how “Meta contributed to human rights abuses in Ethiopia.”

The renewed push for reparations comes just as a case in Kenya, in which Ethiopians are demanding a $1.6 billion settlement from Meta for allegedly fueling the Tigray war, resumes next week. Amnesty International is an interested party in the case.

Amnesty International has also asked Meta to expand its content moderation capabilities in Ethiopia to cover 84 languages, up from the four it currently covers, and to publicly acknowledge and apologize for contributing to human rights abuses during the war. The Tigray war broke out in November 2020 after conflict between the federal government of Ethiopia, Eritrea and the Tigray People’s Liberation Front (TPLF) escalated in the northern region of the East African country.

The rights group says Facebook “became awash with content inciting violence and advocating hatred,” including posts that dehumanized and discriminated against the Tigrayan community. It blamed Meta’s “surveillance-based business model and engagement-centric algorithms,” which prioritize “engagement at all costs” and profit first, for normalizing “hate, violence and discrimination against the Tigrayan community.”

“Meta’s content-shaping algorithms are tuned to maximize engagement, and to boost content that is often inflammatory, harmful and divisive, as this is what tends to garner the most attention from users,” the report said.

“In the context of the northern Ethiopia conflict, these algorithms fueled devastating human rights impacts, amplifying content targeting the Tigrayan community across Facebook, Ethiopia’s most popular social media platform – including content which advocated hatred and incited violence, hostility and discrimination,” said the report, which documented lived experiences of Tigray war victims.

Amnesty International says the use of algorithmic virality – where certain content is amplified to reach a wide audience – posed significant risks in conflict-prone areas, as what happened online could easily spill over into offline violence. It faulted Meta for prioritizing engagement over the welfare of Tigrayans, for subpar moderation that let disinformation thrive on its platform, and for disregarding earlier warnings about how Facebook was at risk of misuse.

The report recounts how, before the war broke out and during the conflict, Meta failed to heed warnings from researchers, the Facebook Oversight Board, civil society groups and its “Trusted Partners” that Facebook could contribute to mass violence in Ethiopia.

For instance, in June 2020, four months before the war broke out in northern Ethiopia, digital rights organizations sent a letter to Meta about the harmful content circulating on Facebook in Ethiopia, warning that it could “lead to physical violence and other acts of hostility and discrimination against minority groups.”

The letter made a number of recommendations including “ceasing algorithmic amplification of content inciting violence, temporary changes to sharing functionalities, and a human rights impact assessment into the company’s operations in Ethiopia.”

Amnesty International says similar systemic failures were witnessed in Myanmar three years before the war in Ethiopia, such as the use of an automated content removal system that could not read the local typeface, which allowed harmful content to stay online.

As in Myanmar, the report says moderation was bungled in the East African country despite the nation being on Meta’s list of most at-risk countries in its “tier system,” which was meant to guide the allocation of moderation resources.

“Meta was not able to adequately moderate content in the main languages spoken in Ethiopia and was slow to respond to feedback from content moderators regarding terms which should be considered harmful. This resulted in harmful content being allowed to circulate on the platform – at times even after it was reported, because it was not found to violate Meta’s community standards,” Amnesty International said.

“While content moderation alone would not have prevented all the harms stemming from Meta’s algorithmic amplification, it is an important mitigation tactic,” it said.

Separately, a recent United Nations Human Rights Council report on Ethiopia also found that despite Facebook identifying Ethiopia as “at-risk,” it was slow to respond to requests for the removal of harmful content, failed to make sufficient financial investment and had inadequate staffing and language capabilities. A Global Witness investigation also found that Facebook was “extremely poor at detecting hate speech in the main language of Ethiopia.” Whistleblower Frances Haugen previously accused Facebook of “literally fanning ethnic violence” in Ethiopia.

Meta disputed that it had failed to take measures to ensure Facebook was not used to fan violence, saying: “We fundamentally disagree with the conclusions Amnesty International has reached in the report, and the allegations of wrongdoing ignore important context and facts. Ethiopia has, and continues to be, one of our highest priorities and we have introduced extensive measures to curb violating content on Facebook in the country.”

“Our safety and integrity work in Ethiopia is guided by feedback from local civil society organizations and international institutions – many of whom we continue to work with, and met in Addis Ababa this year. We employ staff with local knowledge and expertise, and continue to develop our capabilities to catch violating content in the most widely spoken languages in the country, including Amharic, Oromo, Somali and Tigrinya,” said a Meta spokesperson.

Amnesty International says the measures Meta took, like improving its content moderation and language classifier systems and reducing reshares, came too late and were “limited in scope,” as they do not “address the root cause of the threat Meta represents to human rights – the company’s data-hungry business model.”

Among its recommendations are reforming Meta’s “Trusted Partner” program to ensure civil society organizations and human rights defenders play a meaningful role in content-related decisions, and conducting human rights impact assessments of its platforms in Ethiopia. Additionally, it urged Meta to stop the invasive collection of personal data and of information that threatens human rights, and to “give users an opt-in option for the use of its content-shaping algorithms.”

However, the rights group is not oblivious to Big Tech’s general unwillingness to put people first, and it called on governments to enact and enforce laws and regulations to prevent and punish corporate abuses.

“It is more crucial than ever that states honor their obligation to protect human rights by introducing and enforcing meaningful legislation that will rein in the surveillance-based business model,” the report said.
