Monday, December 23, 2024

We want our services to be a place where people around the world can express themselves freely and safely. This is especially true in situations where social media can be used to spread hate, fuel tension and incite violence on the ground. That’s why we have clear rules against terrorism, hate speech and incitement to violence, and subject matter experts who help develop and enforce those rules. We also have a corporate Human Rights policy and a dedicated Human Rights team, which helps us manage human rights risks and better understand how our products and technologies affect different countries and communities.

As part of our commitment to help create an environment where people can express themselves freely and safely, and following a recommendation from the Oversight Board in September 2021, we asked Business for Social Responsibility (BSR) — an independent organization with expertise in human rights — to conduct a due diligence exercise into the impact of our policies and processes in Israel and Palestine during the May 2021 escalation, including an examination of whether these policies and processes were applied without bias. Over the course of the last year, BSR conducted a detailed analysis, including engaging with groups and rights holders in Israel, Palestine, and globally, to inform its report. Today, we’re publishing these findings and our response.

Due Diligence Insights

As BSR recognized in their report, the events of May 2021 surfaced industry-wide, long-standing challenges around content moderation in conflict-affected regions, and the need to protect freedom of expression while reducing the risk of online services being used to spread hate or incite violence. The report also highlighted how managing these issues was made more difficult by the complex conditions that surround the conflict, including its social and historical dynamics, various fast-moving violent events, and the actions and activities of terrorist organizations. 

Despite these complications, BSR identified a number of areas of “good practice” in our response. These included our efforts to prioritize measures that reduced the risk of our platforms being used to encourage violence or harm, including quickly establishing a dedicated Special Operations Center to respond to activity across our apps in real time. This center was staffed with expert teams, including regional experts and native speakers of Arabic and Hebrew, who worked to remove content that violated our policies and to address enforcement errors as soon as we became aware of them. BSR also cited our efforts to remove content in a manner that was proportionate and in line with global human rights standards.

Beyond these areas of good practice, BSR concluded that different viewpoints, nationalities, ethnicities and religions were well represented in the teams working on these issues at Meta. They found no evidence of intentional bias on any of these grounds among any of these employees. They also found no evidence that, in developing or implementing any of our policies, we sought to benefit or harm any particular community.

That said, BSR did raise important concerns. These included under-enforcement against violating content, including content inciting violence against Israelis and Jews on our platforms, as well as specific instances where BSR considered that our policies and processes had an unintentional adverse impact on Palestinian and Arab communities, primarily on their freedom of expression. BSR made 21 specific recommendations as a result of its due diligence, covering our policies, how those policies are enforced, and our efforts to provide transparency to our users.

Our Actions in Response to the Recommendations

Since we received the final report, we’ve carefully reviewed these recommendations to learn where and how we can improve. Our response details our commitment to implementing 10 of the recommendations, partly implementing four, and assessing the feasibility of another six. We will take no further action on the remaining recommendation.

As BSR makes clear, there are no quick, overnight fixes for many of these recommendations. While we have already made significant changes as a result of this exercise, this process will take time, including time to understand how some of these recommendations can best be addressed and whether they are technically feasible.

Here’s an update on our work to address some of the key areas identified in the report: 

Our Policies

BSR recommended that we review our policies on incitement to violence and on Dangerous Organizations and Individuals (DOI). The DOI policy prohibits groups that proclaim a violent mission or are engaged in violence, such as terrorist, hate and criminal organizations as defined by our policies, from having a presence on Facebook or Instagram. We assess these entities based on their behavior both online and offline, most significantly their ties to violence. We have committed to implementing these recommendations, including launching a review of both policy areas to examine how we approach political discussion of banned groups, and how we can do more to tackle content glorifying violence. As part of this comprehensive review, we will consult extensively with a broad spectrum of experts, academics and stakeholders, not just in the region but across the globe.

BSR also recommended that we tier the system of strikes and penalties we apply when people violate our DOI policy. We have committed to assessing the feasibility of this particular recommendation, but have already begun work to make this system simpler, more transparent, and more proportionate.  
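As a rough illustration of what a tiered system like this could look like, here is a minimal sketch in code. The tier names, strike weights and penalties below are invented for the example and are not Meta's actual rules or thresholds.

```python
from dataclasses import dataclass

# Illustrative penalty ladder; the real tiers, thresholds and durations
# used by Meta are not public, so these values are assumptions.
PENALTY_LADDER = {
    1: "warning",
    2: "24h_feature_limit",
    3: "7d_posting_restriction",
    4: "30d_posting_restriction",
    5: "account_disable_review",
}

@dataclass
class Account:
    user_id: str
    strikes: int = 0

def apply_strike(account: Account, severity_weight: int = 1) -> str:
    """Add a strike (weighted by violation severity) and return the penalty.

    More severe violations advance the account further up the ladder,
    making penalties proportionate rather than one-size-fits-all.
    """
    account.strikes += severity_weight
    level = min(account.strikes, max(PENALTY_LADDER))
    return PENALTY_LADDER[level]

acct = Account(user_id="example")
print(apply_strike(acct))                     # -> "warning"
print(apply_strike(acct, severity_weight=2))  # -> "7d_posting_restriction"
```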

In addition, BSR encouraged us to conduct stakeholder engagement around, and ensure transparency on, our US legal obligations in this area. We have committed to partly implementing this recommendation. While we regularly carry out broad stakeholder engagement on these policies and how they are enforced, we rely on legal counsel and relevant sanctions authorities to understand our specific compliance obligations in this area. We agree that transparency is critically important here, and through our Community Standards we provide details of how we define terrorist groups, how we tier them, and how those tiers affect the penalties we apply to people who break our rules.

Enforcement of Our Policies

BSR made a number of recommendations focused on our approach to reviewing content in Hebrew and Arabic. 

BSR recommended that we continue developing and deploying working machine learning classifiers for Hebrew. We’ve committed to implementing this recommendation, and since May 2021 we have launched a Hebrew “hostile speech” classifier to help us proactively detect more violating Hebrew content. We believe this will significantly improve our capacity to handle situations like this, where we see major spikes in violating content.
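Meta has not published how this classifier is built. As a hedged sketch of how a proactive text classifier of this general kind can be assembled, the example below trains a simple TF-IDF plus logistic regression pipeline and flags high-scoring posts for human review. The training data, labels and threshold are placeholders, and real systems use far larger labeled corpora and more capable models.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data standing in for labeled Hebrew text;
# production systems rely on large corpora labeled by native speakers.
texts = ["example hostile sentence", "example benign sentence",
         "another hostile example", "another benign example"]
labels = [1, 0, 1, 0]  # 1 = violating ("hostile speech"), 0 = benign

# Character n-grams tend to cope better than word tokens with rich
# morphology and spelling variation in languages like Hebrew.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(texts, labels)

def flag_for_review(text: str, threshold: float = 0.8) -> bool:
    """Route text to human review when the violating-class score is high."""
    score = model.predict_proba([text])[0][1]
    return score >= threshold
```

A proactive classifier like this only prioritizes content for review; enforcement decisions still depend on policy and, in ambiguous cases, human judgment.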

BSR also recommended that we continue work to establish processes that better route potentially violating Arabic content to content review by dialect. We’re assessing the feasibility of this recommendation. We have large and diverse teams reviewing Arabic content, with native language skills and an understanding of the local cultural context across the region, including in Palestine. We also have systems that use technology to determine which language a piece of content is in, so we can ensure it is reviewed by the relevant content reviewers. We’re exploring a range of options to improve this process. This includes considering hiring more content reviewers with diverse dialect and language capabilities, and working to understand whether we can train our systems to better distinguish between Arabic dialects in order to route and review Arabic content.
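To make the routing idea concrete, here is a minimal sketch of dialect-based queueing. The `detect_dialect` function is a stub standing in for a trained dialect-identification model, and the dialect labels, confidence threshold and queue names are all illustrative assumptions.

```python
from collections import defaultdict

# Hypothetical reviewer queues keyed by dialect label.
QUEUES: dict = defaultdict(list)
FALLBACK_QUEUE = "arabic_general"

def detect_dialect(text: str) -> tuple:
    """Stand-in for a trained dialect-identification model that returns
    a dialect label (e.g. 'levantine', 'egyptian', 'gulf', 'maghrebi')
    and a confidence score."""
    return "levantine", 0.62  # placeholder output

def route_for_review(text: str, min_confidence: float = 0.75) -> str:
    """Send content to a dialect-specific queue, or to a general Arabic
    queue when the model is not confident enough."""
    dialect, confidence = detect_dialect(text)
    queue = dialect if confidence >= min_confidence else FALLBACK_QUEUE
    QUEUES[queue].append(text)
    return queue

print(route_for_review("example post text"))  # -> "arabic_general"
```

The fallback queue matters: misrouting a low-confidence prediction to the wrong dialect team could be worse than sending it to reviewers with broad dialect coverage.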

BSR’s analysis notes that Facebook and Instagram prohibit antisemitic content under our hate speech policy, which doesn’t allow attacks against anyone based on their religion or any other protected characteristic. BSR also notes that, because we don’t currently track the targets of hate speech, we’re not able to measure the prevalence of antisemitic content specifically, and it has recommended that we develop a mechanism to do so. We agree it would be worthwhile to better understand the prevalence of specific types of hate speech, and we’ve committed to assessing the feasibility of this.
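One way such a mechanism could work, sketched below under stated assumptions: tag each hate speech removal with the protected characteristic it targeted, then aggregate views of that content against a total-views denominator. The log schema, labels and numbers are invented for illustration; the target-tagging step is precisely the capability that does not exist today.

```python
from collections import Counter

# Hypothetical enforcement log in which each removal carries a label for
# the group the hate speech targeted (the capability BSR recommends).
removals = [
    {"target": "religion:jewish", "views_before_removal": 1200},
    {"target": "ethnicity:arab", "views_before_removal": 300},
    {"target": "religion:jewish", "views_before_removal": 500},
]
TOTAL_CONTENT_VIEWS = 10_000_000  # placeholder sample-wide denominator

views_by_target = Counter()
for removal in removals:
    views_by_target[removal["target"]] += removal["views_before_removal"]

# Prevalence expressed as views of violating content per 10,000 content
# views, broken down by targeted group.
for target, views in views_by_target.items():
    print(target, round(views / TOTAL_CONTENT_VIEWS * 10_000, 2))
```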

In addition, BSR recommended that we adjust the processes we have in place for updating and maintaining the keywords associated with designated dangerous organizations, to help prevent hashtags from being blocked in error, such as the error in May 2021 that temporarily restricted people’s ability to see content on the al-Aqsa hashtag page. While we quickly fixed this issue, it never should have happened in the first place. We have already implemented this recommendation and have established a process to ensure that expert teams at Meta are now responsible for vetting and approving these keywords.
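A minimal sketch of what a vetting gate like this might look like: proposed keywords have no enforcement effect until an expert reviewer approves them, so a mistaken association (for example, a place name shared with a designated entity) can be caught before any hashtag is restricted. The data model and function names are assumptions for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class KeywordEntry:
    keyword: str
    proposed_by: str
    approved_by: Optional[str] = None
    approved_at: Optional[datetime] = None

pending: dict = {}          # proposed keywords awaiting expert review
active_blocklist: set = set()  # only approved keywords take effect

def propose(keyword: str, proposer: str) -> None:
    """Stage a keyword; it does not affect enforcement yet."""
    pending[keyword.lower()] = KeywordEntry(keyword.lower(), proposer)

def approve(keyword: str, reviewer: str) -> None:
    """Expert sign-off moves a keyword onto the live blocklist."""
    entry = pending.pop(keyword.lower())
    entry.approved_by = reviewer
    entry.approved_at = datetime.now(timezone.utc)
    active_blocklist.add(entry.keyword)

def is_blocked(hashtag: str) -> bool:
    return hashtag.lower() in active_blocklist

propose("example-keyword", proposer="automated-pipeline")
print(is_blocked("example-keyword"))  # False until an expert approves
approve("example-keyword", reviewer="policy-expert")
print(is_blocked("example-keyword"))  # True
```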

Transparency

Underpinning all of this, BSR made a series of recommendations centered on helping people better understand our policies and processes.

BSR recommended that we give people specific and granular information when we remove violating content and apply “strikes.” We are implementing this recommendation in part, because some people violate multiple policies at the same time, which creates challenges for how specific we can be at scale. We already provide this specific and granular information in the majority of cases, and we have started to provide it in more cases where it is possible to do so.
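As a sketch of the difference granularity makes, the example below builds a removal notice that names every policy a post violated instead of a generic message. The policy names, descriptions and wording are illustrative assumptions, not Meta's actual notification copy.

```python
# Hypothetical policy catalog; names and descriptions are invented.
POLICY_DESCRIPTIONS = {
    "hate_speech": "attacks on people based on protected characteristics",
    "incitement": "calls for or statements of intent to commit violence",
}

def build_notification(violated_policies: list) -> str:
    """List each violated policy rather than a one-line generic notice."""
    lines = ["Your post was removed because it violates:"]
    for policy in violated_policies:
        description = POLICY_DESCRIPTIONS.get(policy, "our Community Standards")
        lines.append(f"- {policy.replace('_', ' ').title()}: {description}")
    return "\n".join(lines)

print(build_notification(["hate_speech", "incitement"]))
```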

BSR also recommended that we disclose the number of formal reports we receive from government entities to remove content that doesn’t violate local law but may violate our Community Standards. We have committed to implementing this recommendation. We already publish a twice-yearly report detailing, by country, how many pieces of content we restrict for violating local law as a result of a valid legal request. We are now working to expand the metrics in this report to include details on content removed for violating our policies following a government request.

BSR’s report is a critically important step forward for us and our work on human rights. Global events are dynamic, and so the ways in which we address safety, security and freedom of expression need to be dynamic too. Human rights assessments like these are an important way we can continue to improve our products, policies and processes. 

For more information about additional steps we have taken, you can read our response to BSR’s assessment in full here. We will continue to keep people updated on our progress in our annual Human Rights report. 
